Average Depth and Number of Misclassifications
             for Decision Trees

             Igor Chikalov, Shahid Hussain, and Mikhail Moshkov

           Mathematical and Computer Sciences & Engineering Division
               King Abdullah University of Science and Technology
                       Thuwal 23955-6900, Saudi Arabia
       {igor.chikalov, shahid.hussain, mikhail.moshkov}@kaust.edu.sa




      Abstract. This paper presents a new tool for the study of relationships
      between total path length (average depth) and number of misclassifica-
      tions for decision trees. In addition to the algorithm, the paper also
      presents the results of experiments with datasets from the UCI ML
      Repository [1].



1   Introduction

Decision trees are widely used as predictors, as a way of knowledge represen-
tation, and as algorithms for problem solving. These uses require optimizing
decision trees for certain cost functions such as the number of misclassifications,
depth/average depth, and number of nodes. That is, minimizing one of these
cost functions yields more accurate, faster, or more understandable decision
trees, respectively.
    We have created a software system for decision trees (as well as decision
rules) called Dagger, a tool based on dynamic programming which allows us
to optimize decision trees (and decision rules) sequentially relative to various
cost functions such as depth (length), average depth (average length), total
number of nodes, and number of misclassifications [2–5]. The aim of this paper
is to study the relationships between the total path length (average depth) and
the number of misclassifications for decision trees, and to present a new tool
(an extension of our software) for computing such relationships. We also consider
the work of this tool on decision tables from the UCI ML Repository [1].
    The presented algorithm and its implementation in the software tool Dagger
together with similar algorithms devised by the authors (see for example [6]) can
be useful for investigations in Rough Sets [7, 8] where decision trees are used as
classifiers [9].
    This paper is divided into six sections including the Introduction. Section 2
presents basic notions about decision tables/trees and cost functions. In Sect. 3
an algorithm for constructing a directed acyclic graph for representation of de-
cision trees is presented. Section 4 presents the main algorithm and results of
this paper. Section 5 shows experimental results of running the algorithm on
different datasets, and we conclude the paper in Sect. 6.
2   Basic Notions

In this section we define the main notions related to the study of decision
tables and trees, and the different cost functions used for decision tree
construction.

2.1   Decision Tables and Decision Trees

In this paper, we consider only decision tables with discrete attributes. These
tables contain neither missing values nor equal rows. Consider a decision table
T depicted in Fig. 1. Here f1, . . . , fm are the names of columns (conditional


                          f1   . . .  fm  | d
                          b11  . . .  b1m | c1
                          ...  . . .  ... | ...
                          bN1  . . .  bNm | cN

                            Fig. 1. Decision table


attributes); c1 , . . . , cN are nonnegative integers which can be interpreted as de-
cisions (values of the decision attribute d); bij are nonnegative integers which
are interpreted as values of conditional attributes (we assume that the rows
(b11, . . . , b1m), . . . , (bN1, . . . , bNm) are pairwise different). We denote by E(T)
the set of attributes (columns of the table T), each of which contains more than
one value. For fi ∈ E(T), let E(T, fi) be the set of values from the column fi.
    Let fi1 , . . . , fit ∈ {f1 , . . . , fm } and a1 , . . . , at be nonnegative integers. We
denote by T(fi1, a1) . . . (fit, at) the subtable of the table T which consists of
exactly those rows of T that have the numbers a1, . . . , at at the intersections
with the columns fi1, . . . , fit, respectively. Such nonempty tables (including the table
T ) will be called separable subtables of the table T . For a subtable Θ of the
table T , we will denote by R(Θ) the number of unordered pairs of rows that are
labeled with different decisions. The minimum decision value attached to the
maximum number of rows in a nonempty subtable Θ will be called the most
common decision for Θ.
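    To make these notions concrete, the following is a minimal Python sketch
(our own illustration, not part of the paper's tool; all names are hypothetical)
that represents a decision table as a list of (row, decision) pairs and computes
separable subtables, R(Θ), and the most common decision:

    from collections import Counter

    # A decision table: a list of (values, decision) pairs, where `values`
    # is a tuple of nonnegative integers, one per attribute f_1, ..., f_m.

    def subtable(theta, assignments):
        """Separable subtable Theta(f_i1, a_1)...(f_it, a_t): exactly those
        rows whose value in column i equals a for every (i, a) given."""
        return [(vals, d) for vals, d in theta
                if all(vals[i] == a for i, a in assignments)]

    def R(theta):
        """R(Theta): unordered pairs of rows with different decisions."""
        counts = Counter(d for _, d in theta)
        n = len(theta)
        same = sum(c * (c - 1) // 2 for c in counts.values())
        return n * (n - 1) // 2 - same

    def most_common_decision(theta):
        """The minimum decision attached to the maximum number of rows."""
        counts = Counter(d for _, d in theta)
        best = max(counts.values())
        return min(d for d, c in counts.items() if c == best)

For example, subtable(T, [(0, 1)]) is the separable subtable T(f1, 1), with
columns indexed from 0.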
    A decision tree Γ over the table T is a finite directed tree with root in which
each terminal node is labeled with a decision. Each nonterminal node is labeled
with a conditional attribute, and for each nonterminal node the outgoing edges
are labeled with pairwise different nonnegative integers. Let v be an arbitrary
node of Γ . We now define a subtable T (v) of the table T . If v is the root then
T (v) = T . Let v be a node of Γ that is not the root, nodes in the path from
the root to v be labeled with attributes fi1 , . . . , fit , and edges in this path be
labeled with values a1, . . . , at, respectively. Then T(v) = T(fi1, a1) . . . (fit, at).
    Let Γ be a decision tree over T . We will say that Γ is a decision tree for T
if any node v of Γ satisfies the following conditions:
 – If R(T (v)) = 0 then v is a terminal node labeled with the most common
   decision for T (v);
 – Otherwise, v is either a terminal node labeled with the most common decision
   for T (v), or v is labeled with an attribute fi ∈ E(T (v)) and if E(T (v), fi ) =
   {a1 , . . . , at }, then t edges leave node v, and these edges are labeled with
   a1 , . . . , at respectively.
    Let Γ be a decision tree for T . For any row r of T , there exists exactly one
terminal node v of Γ such that r belongs to the table T (v). Let v be labeled with
the decision b. We will refer to b as the result of the work of the decision
tree Γ on r. We denote by N(T(v)) the number of rows in the subtable T(v)
and N (T (v), b) the number of rows in T (v) labeled with decision b.
    For an arbitrary row r of the decision table T, we denote by l(r) the length
of the path from the root to the terminal node v of Γ such that r is in T(v).
The total path length, denoted Λ(T, Γ), is the sum of the path lengths l(r)
over all rows r in T. That is,

                              Λ(T, Γ) = Σ_r l(r),

where the sum is taken over all rows r of the table T. Note that the average
depth of Γ relative to T, denoted havg(T, Γ), is equal to the total path length
divided by the total number of rows in T, i.e.,

                          havg(T, Γ) = Λ(T, Γ) / N(T).

We will drop T when it is obvious from the context. That is, we will write Λ(Γ )
instead of Λ(T, Γ ) if T is known.
    The number of misclassifications for decision tree Γ for the table T , denoted
as µ(Γ ) = µ(T, Γ ), is the number of rows r in T for which the result of the
work of decision tree Γ on r is not equal to the decision attached to the row r.
We should note that the cost functions Λ and µ are bounded above by values
depending upon the size of the table. That is, mN and N are upper bounds for
Λ and µ for a decision table with m conditional attributes and N rows.
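    As an illustration of these cost functions, here is a short Python sketch
(again our own, under a hypothetical tree encoding: a terminal node is a
decision, a nonterminal node is a pair (attribute index, {value: subtree}))
that computes Λ(T, Γ), havg(T, Γ), and µ(T, Γ):

    def classify(tree, row, depth=0):
        """Follow the path of `row` through the tree; return (result, l(r))."""
        if not isinstance(tree, tuple):     # terminal node: a decision
            return tree, depth
        attr, branches = tree               # (attribute index, {value: subtree})
        return classify(branches[row[attr]], row, depth + 1)

    def total_path_length(table, tree):
        """Lambda(T, Gamma): sum of path lengths l(r) over all rows r of T."""
        return sum(classify(tree, vals)[1] for vals, _ in table)

    def average_depth(table, tree):
        """h_avg(T, Gamma) = Lambda(T, Gamma) / N(T)."""
        return total_path_length(table, tree) / len(table)

    def misclassifications(table, tree):
        """mu(T, Gamma): rows on which the tree's result differs from d."""
        return sum(classify(tree, vals)[0] != d for vals, d in table)

For instance, the tree (0, {0: 0, 1: 1}) splits on f1 and assigns decisions
0 and 1 to its two branches (hypothetical values).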


3   Representation of Sets of Decision Trees

Consider an algorithm for construction of a graph ∆(T ), which represents the
set of all decision trees for the table T . Nodes of this graph are some separable
subtables of the table T . During each step we process one node and mark it with
the symbol *. We start with the graph that consists of one node T and finish
when all nodes of the graph are processed.
    Let the algorithm have already performed p steps. We now describe step
number (p + 1). If all nodes are processed, then the work of the algorithm
is finished, and the resulting graph is ∆(T). Otherwise, choose a node (ta-
ble) Θ that has not been processed yet. If R(Θ) = 0, label the considered
node with the common decision b for Θ, mark it with the symbol * and proceed
to step number (p + 2). If R(Θ) > 0, then for each fi ∈ E(Θ) draw
a bundle of edges from the node Θ (this bundle of edges will be called fi -
bundle). Let E(Θ, fi ) = {a1 , . . . , at }. Then draw t edges from Θ and label
these edges with pairs (fi , a1 ), . . . , (fi , at ) respectively. These edges enter into
nodes Θ(fi , a1 ), . . . , Θ(fi , at ). If some of the nodes Θ(fi , a1 ), . . . , Θ(fi , at ) are
not present in the graph, then add these nodes to the graph. Mark the node Θ
with the symbol * and proceed to step number (p + 2).
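    A possible implementation of this construction, sketched in Python under
the row-list table encoding introduced above (it reuses R from the earlier
sketch; all names are ours, not the paper's):

    def E(theta):
        """E(Theta): attributes whose column contains more than one value."""
        m = len(theta[0][0])
        return [i for i in range(m)
                if len({vals[i] for vals, _ in theta}) > 1]

    def build_dag(table):
        """Construct Delta(T). A node is a separable subtable, keyed by the
        frozenset of its row indices in `table`; a nonterminal node stores
        one f_i-bundle per f_i in E(Theta), mapping values to child keys."""
        dag = {}
        stack = [frozenset(range(len(table)))]
        while stack:                      # process one unmarked node per step
            k = stack.pop()
            if k in dag:                  # already marked with *
                continue
            theta = [table[i] for i in sorted(k)]
            bundles = {}
            if R(theta) > 0:              # otherwise terminal: common decision
                for fi in E(theta):
                    groups = {}
                    for i in sorted(k):
                        groups.setdefault(table[i][0][fi], set()).add(i)
                    bundles[fi] = {a: frozenset(rows)
                                   for a, rows in groups.items()}
                    stack.extend(bundles[fi].values())
            dag[k] = bundles
        return dag

Here ∆(T) is returned as a dictionary mapping each node key to its bundles;
terminal nodes map to an empty dictionary.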


      Fig. 2. Trivial decision tree: a single node labeled with a decision b.

      Fig. 3. Aggregated decision tree: a root labeled with the attribute fi
      whose outgoing edges a1, . . . , at lead to the subtrees Γ1, . . . , Γt.


    Now for each node Θ of the graph ∆(T ), we describe the set of decision
trees corresponding to the node Θ. We will move from terminal nodes, which are
labeled with numbers, to the node T . Let Θ be a node, which is labeled with a
number b. Then the only trivial decision tree depicted in Fig. 2 corresponds to
the node Θ.
    Let Θ be a nonterminal node (table); then there is a number of bundles
of edges starting in Θ. We consider an arbitrary bundle and describe the set
of decision trees corresponding to this bundle. Let the considered bundle be
an fi-bundle, where fi ∈ E(Θ) and E(Θ, fi) = {a1, . . . , at}. Let Γ1, . . . , Γt be
decision trees from the sets corresponding to the nodes Θ(fi, a1), . . . , Θ(fi, at). Then
the decision tree depicted in Fig. 3 belongs to the set of decision trees, which
correspond to this bundle. All such decision trees belong to the considered set,
and this set does not contain any other decision trees. Then the set of decision
trees corresponding to the node Θ coincides with the union of sets of decision
trees corresponding to the bundles starting in Θ and the set containing one
decision tree depicted in Fig. 2, where b is the most common decision for Θ. We
denote by D(Θ) the set of decision trees corresponding to the node Θ.
    The following proposition shows that the graph ∆(T ) can represent all deci-
sion trees for the table T .

Proposition 1. Let T be a decision table and Θ a node in the graph ∆(T ).
Then the set D(Θ) coincides with the set of all decision trees for the table Θ.
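    One immediate use of this representation is counting the trees it encodes.
By the description above, a terminal node contributes one trivial tree, and
each fi-bundle contributes the product of its children's counts, plus one more
trivial tree at every nonterminal node. A sketch (our own, reusing the
build_dag encoding from the previous section):

    import math

    def count_trees(dag, root):
        """|D(Theta)|: one trivial tree plus, for each f_i-bundle, the
        product of the children's counts (memoized on node keys)."""
        memo = {}
        def count(k):
            if k not in memo:
                memo[k] = 1 + sum(
                    math.prod(count(c) for c in bundle.values())
                    for bundle in dag[k].values())
            return memo[k]
        return count(root)

Called as count_trees(dag, frozenset(range(len(table)))), this returns the
size of the set D(T) represented by ∆(T).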


4    Relationships

In the following, we consider relationships between average depth (total path
length) and number of misclassifications for decision trees and give an algorithm
to compute these relationships. We also illustrate the work of the algorithm
on an example decision table.
    Let T be a decision table with N rows and m columns labeled with f1 , . . . , fm .
Let Θ be a node of ∆(T) and D(Θ) be the set of all decision trees for Θ (as
discussed in Sect. 3). We will use the notion of total path length instead of
average depth for clarity and ease of implementation.
    We define two sets BΛ = {0, 1, . . . , mN} and Bµ = {0, 1, . . . , N} and func-
tions on these sets, FΘ : BΛ → Bµ and GΘ : Bµ → BΛ, as follows:

                FΘ(n) = min{µ(Γ) : Γ ∈ D(Θ), Λ(Γ) ≤ n}, n ∈ BΛ,
                GΘ(n) = min{Λ(Γ) : Γ ∈ D(Θ), µ(Γ) ≤ n}, n ∈ Bµ.

    We now describe an algorithm which allows us to construct the function FΘ
for every node Θ from the graph ∆(T ). We begin from terminal nodes and move
upward to the node T .
    Let Θ be a terminal node. It means that all rows of Θ are labeled with the
same decision b and the decision tree Γb as depicted in Fig. 2 belongs to D(Θ).
It is clear that Λ(Γb ) = µ(Γb ) = 0 for the table Θ. Therefore FΘ (n) = 0 for any
n ∈ BΛ .
    Let us consider a nonterminal node Θ and a bundle of edges which start
from this node. Let these edges be labeled with the pairs (fi, a1), . . . , (fi, at) and
enter the nodes Θ(fi, a1), . . . , Θ(fi, at), respectively, to which the functions
FΘ(fi,a1), . . . , FΘ(fi,at) are already attached.
    To this bundle (the fi-bundle) we assign the function FΘ^fi: for any n ∈ BΛ,
n ≥ N(Θ),

                FΘ^fi(n) = min Σ_{j=1}^{t} FΘ(fi,aj)(nj),

where the minimum is taken over all n1, . . . , nt such that nj ∈ BΛ for j = 1, . . . , t
and n1 + · · · + nt ≤ n − N(Θ). (The term N(Θ) accounts for the root: each of
the N(Θ) rows passes through it, adding one edge to its path.) It should be
noted that computing FΘ^fi is a nontrivial task; we describe the method in
detail in the following subsection. Furthermore, we know that there is only one
decision tree in D(Θ) for which the total path length is at most N(Θ) − 1. This
is the tree Γb depicted in Fig. 2, where b is the most common decision for Θ.
Therefore, for any n ∈ BΛ, n < N(Θ),

                FΘ^fi(n) = N(Θ) − N(Θ, b).
   We can use the following proposition to construct the function GT (using the
method of transformation of functions described in the Appendix).

Proposition 2. For any n ∈ Bµ ,

                         GT (n) = min{p ∈ BΛ : FT (p) ≤ n}.

   Note that since FT is nonincreasing, to find the value GT(n) for some n ∈ Bµ
by binary search it is enough to make O(log |BΛ|) = O(log(mN)) comparison
operations.
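    A sketch of this search in Python, assuming FT is stored as the
nonincreasing tuple (FT(0), . . . , FT(mN)) (our own encoding, not the paper's):

    def G_from_F(F, n):
        """G_T(n) = min{p in B_Lambda : F_T(p) <= n}, given F as the
        nonincreasing tuple (F_T(0), ..., F_T(mN)).  The feasible set is a
        suffix of B_Lambda, so binary search finds its first element in
        O(log |B_Lambda|) comparisons; for exact decision trees F_T(mN) = 0,
        so the set is nonempty for every n in B_mu."""
        lo, hi = 0, len(F) - 1
        while lo < hi:
            mid = (lo + hi) // 2
            if F[mid] <= n:      # feasible: the answer is mid or to its left
                hi = mid
            else:                # infeasible: the answer lies to the right
                lo = mid + 1
        return lo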
4.1   Computing FΘ^fi

Let Θ be a nonterminal node in ∆(T), fi ∈ E(Θ) and E(Θ, fi) = {a1, . . . , at}.
Furthermore, we assume that the functions FΘ(fi,aj), for j = 1, . . . , t, have
already been computed. Let the values of FΘ(fi,aj) be given by the tuple of pairs

                ((0, µ_0^j), (1, µ_1^j), . . . , (mN, µ_mN^j)),

where µ_k^j = FΘ(fi,aj)(k) for k = 0, . . . , mN. We need to compute, for all
n ∈ BΛ, n ≥ N(Θ),

                FΘ^fi(n) = min Σ_{j=1}^{t} FΘ(fi,aj)(nj),

where the minimum is over all nj ∈ BΛ such that n1 + · · · + nt ≤ n − N(Θ).
    We construct a layered directed acyclic graph (DAG) δ(Θ, fi) to compute
FΘ^fi as follows. The DAG δ(Θ, fi) contains nodes arranged in t + 1 layers
l0, l1, . . . , lt. Each node carries a pair of labels, and each layer lj (1 ≤ j ≤ t)
contains jmN + 1 nodes. The first entry of the label-pair of a node in layer lj
is an integer from {0, 1, . . . , jmN}. The layer l0 contains only one node, labeled
with (0, 0).
    Each node in layer lj (0 ≤ j < t) has mN + 1 outgoing edges to nodes in
layer lj+1. These edges are labeled with the corresponding pairs in FΘ(fi,aj+1):
a node whose label-pair has first entry x in layer lj connects to the nodes whose
label-pairs have first entries x + 0 to x + mN in layer lj+1, with edges labeled
(0, µ_0^{j+1}), (1, µ_1^{j+1}), . . . , (mN, µ_mN^{j+1}), respectively.
    The function FΘ^fi(n) for n ∈ BΛ can then be computed using the DAG
δ(Θ, fi) for Θ ∈ ∆(T) and the considered bundle of edges for the attribute
fi ∈ E(Θ) as follows.
    Each node in layer l1 copies its second value from the second value of its
incoming edge label (there is only one incoming edge for each node in layer l1).
Let (k, µ) be a node in layer lj (2 ≤ j ≤ t), let E = {(λ1, µ1), (λ2, µ2), . . . ,
(λr, µr)} be the set of predecessors of (k, µ), and let (α1, β1), (α2, β2), . . . ,
(αr, βr) be the labels of the edges between the nodes in E and (k, µ),
respectively. It is clear that k = λi + αi (1 ≤ i ≤ r). We set

                µ = min_{1≤i≤r} {µi + βi}.

We do this for every node, layer by layer, until all nodes in δ(Θ, fi) have
received their second label.
    Once the second values of the label-pairs in layer lt are computed, we can
use these labels to compute FΘ^fi(n). Let (k1, µ1), . . . , (ks, µs) be the label-pairs
of the nodes in lt. One can show that for all n ∈ BΛ, n ≥ N(Θ),

                FΘ^fi(n) = min{µq : q ∈ {1, . . . , s}, kq ≤ n − N(Θ)}.
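    The layer-by-layer relaxation above is, in effect, an iterated (min, +)
convolution of the children's functions followed by a prefix-minimum pass.
The following Python sketch realizes it under the tuple encoding used above;
the function names and parameter layout are ours, not the paper's:

    INF = float("inf")

    def min_plus(f, g, cap):
        """(min,+) convolution truncated at index cap:
        h[k] = min over k1 + k2 = k of f[k1] + g[k2]."""
        h = [INF] * (cap + 1)
        for k1, v1 in enumerate(f[:cap + 1]):
            for k2, v2 in enumerate(g):
                if k1 + k2 > cap:
                    break
                if v1 + v2 < h[k1 + k2]:
                    h[k1 + k2] = v1 + v2
        return h

    def bundle_F(child_Fs, n_theta, mN, mu_trivial):
        """F_Theta^{fi} over B_Lambda = {0, ..., mN}.  child_Fs holds the
        tuples (F(0), ..., F(mN)) of the children Theta(fi, aj); n_theta is
        N(Theta); mu_trivial = N(Theta) - N(Theta, b) for the most common
        decision b."""
        acc = [0]                      # layer l_0: the single pair (0, 0)
        for f in child_Fs:             # one (min,+) layer per child subtable
            acc = min_plus(acc, f, mN)
        out, best = [], INF
        for n in range(mN + 1):
            if n < n_theta:            # only the trivial tree Gamma_b fits
                out.append(mu_trivial)
            else:                      # min over n1+...+nt <= n - N(Theta)
                best = min(best, acc[n - n_theta])
                out.append(best)
        return out

Combining bundles is then pointwise: FΘ(n) can be taken as the minimum of
FΘ^fi(n) over all fi ∈ E(Θ), where the n < N(Θ) case already accounts for the
trivial tree; this combination step is implicit in the description above.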
    An example of the work of the algorithm can be found in Fig. 4. It is
important to note that in this example we only show the values of total path
length that are possible for the set of trees constructible for the given subtable
(table), together with their corresponding numbers of misclassifications.


      Fig. 4. Example DAG with relationship values (nodes are the separable
      subtables of an example decision table with conditional attributes
      f1, f2, f3 and decision d, each annotated with the achievable total
      path lengths Λ and the corresponding minimum numbers of
      misclassifications µ; for the full table, Λ = 0 gives µ = 2 and
      Λ = 4 gives µ = 0)




5   Experimental Results

We performed several experiments on datasets (decision tables) acquired from
the UCI ML Repository [1]. The resulting plots are depicted in Figs. 5, 6, 7,
and 8.
    As these figures show, when the total path length is zero we get the maximum
number of misclassifications (the base case), which makes the figures less clear
in the other regions. We therefore remove the point (0, 67) from Fig. 5 and show
the resulting plot, with average depth instead of total path length, in Fig. 9.
Similarly, Fig. 10 shows the inverse of the plot shown in Fig. 6 without the base
case (0, 3916).


6   Conclusions

This paper is devoted to the consideration of a new tool for the study of decision
trees. We present and explain in detail the algorithm to compute the relationship
between the total path length (average depth) and the number of misclassifica-
tions for decision trees. That is, for a given decision table we can find the
minimum number of misclassifications for a given value of total path length
when we consider exact decision trees.
      Fig. 5. lymphography dataset (18 attributes and 148 rows): number of
      misclassifications vs. total path length.

      Fig. 6. mushroom dataset (22 attributes and 8124 rows): number of
      misclassifications vs. total path length.

      Fig. 7. zoo-data dataset (16 attributes and 101 rows): number of
      misclassifications vs. total path length.

      Fig. 8. nursery dataset (8 attributes and 12960 rows): number of
      misclassifications vs. total path length.

      Fig. 9. lymphography dataset (18 attributes and 148 rows): number of
      misclassifications vs. average depth.

      Fig. 10. mushroom dataset (22 attributes and 8124 rows): average depth
      vs. number of misclassifications.
    Future studies will be connected with the extension of this tool to other
cost functions, especially the relationship between the depth and the average
depth of decision trees.


References
1. Frank, A., Asuncion, A.: UCI Machine Learning Repository (2010)
2. Alkhalid, A., Chikalov, I., Moshkov, M.: On algorithm for building of optimal α-
   decision trees. In Szczuka, M.S., Kryszkiewicz, M., Ramanna, S., Jensen, R., Hu,
   Q., eds.: RSCTC 2010, LNCS (LNAI). Volume 6086., Heidelberg, Springer (2010)
   438–445
3. Alkhalid, A., Chikalov, I., Moshkov, M.: A tool for study of optimal decision trees.
   In Yu, J., Greco, S., Lingras, P., Wang, G., Skowron, A., eds.: RSKT. Volume LNCS
   6401., Springer (2010) 353–360
4. Alkhalid, A., Chikalov, I., Hussain, S., Moshkov, M.: Extensions of dynamic
   programming as a new tool for decision tree optimization. Volume 13 of SIST.
   Springer, Heidelberg (2012) 16–36
5. Alkhalid, A., Amin, T., Chikalov, I., Hussain, S., Moshkov, M., Zielosko, B.: Dagger:
   a tool for analysis and optimization of decision trees and rules. In: Computational
   Informatics, Social Factors and New Information Technologies: Hypermedia Per-
   spectives and Avant-Garde Experiencies in the Era of Communicability Expansion.
   Blue Herons (2011) 29–39
6. Chikalov, I., Hussain, S., Moshkov, M.: Relationships between depth and number
   of misclassifications for decision trees. In Kuznetsov, S.O., Slezak, D., Hepting,
   D.H., Mirkin, B., eds.: Thirteenth International Conference on Rough Sets, Fuzzy
   Sets, Data Mining and Granular Computing (RSFDGrC 2011). Volume LNCS 6743.,
   Springer (2011) 286–292
7. Pawlak, Z.: Theoretical Aspects of Reasoning about Data. Kluwer Academic Pub-
   lishers, Dordrecht (1991)
8. Skowron, A., Rauszer, C.: The discernibility matrices and functions in informa-
   tion systems. In Slowinski, R., ed.: Intelligent Decision Support. Handbook of Ap-
   plications and Advances of the Rough Set Theory, Dordrecht, Kluwer Academic
   Publishers (1992) 331–362
9. Nguyen, H.S.: From optimal hyperplanes to optimal decision trees. Fundamenta
   Informaticae 34(1-2) (1998) 145–174



Appendix: Transformation of Functions
Let f and g be two functions from a set A onto Cf and Cg respectively, where Cf
and Cg are finite sets of nonnegative integers. Let Bf = {mf , mf + 1, . . . , Mf }
and Bg = {ng , ng + 1, . . . , Ng } where mf = min{m : m ∈ Cf } and ng = min{n :
n ∈ Cg }. Furthermore, Mf and Ng are natural numbers such that m ≤ Mf and
n ≤ Ng for any m ∈ Cf and n ∈ Cg , respectively.
   We define two functions F : Bg → Bf and G : Bf → Bg as follows:

                 F(n) = min{f (a) : a ∈ A, g(a) ≤ n}, ∀n ∈ Bg ,                     (1)
                 G(m) = min{g(a) : a ∈ A, f (a) ≤ m}, ∀m ∈ Bf .                     (2)
It is clear that both F and G are nonincreasing functions.
    The following proposition states that the functions F and G can be used
interchangeably and we can evaluate F using G and vice versa, i.e., it is enough
to know only one function to evaluate the other.

Proposition 3. For any n ∈ Bg ,

                       F(n) = min{m ∈ Bf : G(m) ≤ n},

and for any m ∈ Bf ,

                       G(m) = min{n ∈ Bg : F(n) ≤ m}.

Proof. Suppose that for some n ∈ Bg
                                   F(n) = m0 .                                (3)
Furthermore, we assume that

                         min{m ∈ Bf : G(m) ≤ n} = t.                          (4)

   From (3) it follows that

 (i) there exists b ∈ A such that g(b) ≤ n and f (b) = m0 ;
(ii) for any a ∈ A if g(a) ≤ n then f (a) ≥ m0 .

    From (i) it follows that G(m0 ) ≤ n. This implies t ≤ m0 . Let us assume that
t < m0 . In this case, there exists m1 < m0 for which G(m1 ) ≤ n. Therefore, there
exists a ∈ A such that f (a) ≤ m1 and g(a) ≤ n, but from (ii) it follows that
f (a) ≥ m0 , which is impossible. So t = m0 .
    Similarly we can prove the second part of the statement.

    Proposition 3 allows us to transform the function G, given by the tuple
(G(mf), G(mf + 1), . . . , G(Mf)), into the function F and vice versa. Since
G(mf) ≥ G(mf + 1) ≥ · · · ≥ G(Mf), to find the minimum m ∈ Bf such that
G(m) ≤ n we can use binary search, which requires O(log |Bf|) comparisons of
numbers. So to find the value F(n) for n ∈ Bg it is enough to make O(log |Bf|)
comparison operations.
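    As a sanity check of Proposition 3, the following Python sketch builds F
and G directly from definitions (1) and (2) by brute force on a toy choice of
A, f, and g (entirely hypothetical data) and verifies the two identities:

    def brute_F_G(A, f, g, Bf, Bg):
        """F and G built directly from definitions (1) and (2)."""
        F = {n: min(f(a) for a in A if g(a) <= n) for n in Bg}
        G = {m: min(g(a) for a in A if f(a) <= m) for m in Bf}
        return F, G

    # Toy instance (hypothetical): A = {0, ..., 9}, f(a) = a, g(a) = 9 - a,
    # so m_f = n_g = 0 and we may take M_f = N_g = 9.
    A = range(10)
    f = lambda a: a
    g = lambda a: 9 - a
    Bf = Bg = range(10)
    F, G = brute_F_G(A, f, g, Bf, Bg)

    # Proposition 3: each function is recovered from the other.
    assert all(F[n] == min(m for m in Bf if G[m] <= n) for n in Bg)
    assert all(G[m] == min(n for n in Bg if F[n] <= m) for m in Bf)

Here both minima are taken over nonempty sets, since by the definitions of
Bf and Bg we have G(Mf) ≤ Ng and F(Ng) ≤ Mf.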