 Greedy Algorithm for Construction of Decision
  Trees for Tables with Many-Valued Decisions

Mohammad Azad¹, Igor Chikalov¹, Mikhail Moshkov¹, and Beata Zielosko¹,²

      ¹ Mathematical and Computer Sciences & Engineering Division
          King Abdullah University of Science and Technology
                    Thuwal 23955-6900, Saudi Arabia
{mohammad.azad,igor.chikalov,mikhail.moshkov,beata.zielosko}@kaust.edu.sa
          ² Institute of Computer Science, University of Silesia
              39, Bȩdzińska St., 41-200 Sosnowiec, Poland



      Abstract. In this paper, we study a greedy algorithm for the construction
      of approximate decision trees. The algorithm is applicable to decision
      tables with many-valued decisions, where each row is labeled with a set
      of decisions. For a given row, the task is to find a decision from the set
      attached to this row. We use an uncertainty measure equal to the number
      of boundary subtables. We also present experimental results for data
      sets from the UCI Machine Learning Repository, comparing the proposed
      approach with the approach based on generalized decisions.

      Keywords: decision table with many-valued decisions, decision tree,
      greedy algorithm


1   Introduction
In this paper, we study decision trees for decision tables with many-valued
decisions. Such tables arise when we work with experimental or statistical
data. In such data sets, we often have groups of rows with equal values of con-
ditional attributes but, possibly, different values of the decision attribute. In
this case, instead of a group of rows, we can consider one row given by the values
of conditional attributes, and attach to this row a set of decisions: either all
decisions for rows from the group, or the k most frequent decisions for rows from
the group [5]. As a result, we obtain a decision table with many-valued decisions.
    In rough set theory [9, 13], a generalized decision is often used to work
with decision tables that have equal rows labeled with different decisions (in-
consistent decision tables). In this case, we also work with a decision table
with many-valued decisions. The set of decisions attached to equal rows is called
the generalized decision for each of these rows [10, 11]. The usual task is
to find, for a given row, its generalized decision. We will call this the
approach based on generalized decisions.
    In our approach, we study the problem of finding an arbitrary decision, or
one of the most frequent decisions, from the set of decisions attached to a row.
The proposed approach was considered in [2, 7] for the construction of tests
(super-reducts), and in [3] for the construction of decision rules. In this paper,
we apply this approach to the construction of decision trees. It can be useful
from the point of view of knowledge representation: if it is enough to find only
one decision, a decision tree can have fewer nodes than a decision tree
constructed using the generalized decision approach, and is therefore more
understandable. Moreover, if we consider a decision tree as a way of constructing
rules, the number of rules for the proposed approach can be smaller than the
number of rules for the approach based on generalized decisions.
    Decision tables with many-valued decisions can be considered as decision
tables with incomplete information, because we do not know which decision
should be chosen from the set of decisions. However, incomplete information
can also appear in situations where, instead of a single value of a conditional
attribute, we have a subset of values of the attribute domain, or missing values
of attributes. Z. Pawlak [9] and E. Orlowska [8] proposed non-deterministic
information systems for dealing with incomplete information. J. Quinlan [12]
presented extensions of the ID3 algorithm to cope with unknown values of at-
tributes. W. Cohen [4] proposed an algorithm for learning trees and rules from
decision tables with set-valued features. In [14], the authors presented, for
incomplete decision systems, an algorithm for the construction of decision trees
and the extraction of classification rules, based on relationships between
attributes.
    In this paper, we present a greedy algorithm for the construction of decision
trees for decision tables with many-valued decisions. Theoretical results connected
with a greedy algorithm for the construction of approximate decision trees were
presented in [5, 6]. To define the notion of an approximate decision tree, we use an
uncertainty measure B(T) for decision tables with many-valued decisions. We
consider so-called α-decision trees, where α is a real number such that 0 ≤ α < 1.
For a given row r of a table T, an α-decision tree localizes it in a subtable of T
with uncertainty at most αB(T). If α equals 0, we have an exact decision tree
for T.
    We present an upper bound on the number of steps of the considered algo-
rithm. From this bound it follows that, for an arbitrary natural t, the greedy
algorithm has polynomial time complexity on tables that have at most t deci-
sions in each set of decisions attached to rows.
    In this paper, we consider only binary decision tables with many-valued deci-
sions. However, the obtained results can be extended to decision tables filled
by numbers from the set {0, . . . , k − 1}, where k ≥ 3. One of the aims of this
paper is a comparative study of the depth and average depth of decision trees
for the proposed approach and for the approach based on generalized decisions.
We present experimental results for data sets from the UCI ML Repository.
    This paper consists of six sections. Section 2 contains main notions. In
Sect. 3, we present a theorem for decision tables with at most t decisions in
each set of decisions attached to rows. In Sect. 4, we study a greedy algorithm
for α-decision tree construction. Section 5 contains experimental results, and
Sect. 6 contains conclusions.

2      Main Notions
In this section, we consider definitions of notions corresponding to decision tables
with many-valued decisions.
    A (binary) decision table with many-valued decisions is a rectangular table
T filled by numbers from the set {0, 1}. Columns of this table are labeled with
attributes f1, . . . , fn. Rows of the table are pairwise different, and each row is
labeled with a nonempty finite set of natural numbers (a set of decisions). Note
that each (binary) decision table with one-valued decisions can also be interpreted
as a decision table with many-valued decisions: in such a table, each row is
labeled with a set of decisions containing one element.
    We will say that T is a degenerate table if either T has no rows, or the
intersection of the sets of decisions attached to the rows of T is nonempty.
    A decision which belongs to the maximum number of sets of decisions at-
tached to rows of T is called the most common decision for T. If more than one
such decision exists, we choose the minimum one. If T is empty, then 1 is the
most common decision for T.
    A table obtained from T by removal of some rows is called a subtable of T. A
subtable T′ of T is called a boundary subtable if T′ is not degenerate but each
proper subtable of T′ is degenerate. We denote by B(T) the number of boundary
subtables of the table T. It is clear that T is a degenerate table if and only if
B(T) = 0. The value B(T) will be interpreted as the uncertainty of T. An example
of boundary subtables for a decision table T0 with many-valued decisions can be
found in Fig. 1.

              f1 f2 f3   d
         r1    0  0  0  {1}
         r2    0  1  1  {1, 2}
    T0 = r3    1  0  1  {1, 3}
         r4    1  1  0  {2, 3}
         r5    0  0  1  {2}

              f1 f2 f3   d                f1 f2 f3   d
    T1 = r2    0  1  1  {1, 2}   T2 = r1   0  0  0  {1}
         r3    1  0  1  {1, 3}        r4   1  1  0  {2, 3}
         r4    1  1  0  {2, 3}

              f1 f2 f3   d                f1 f2 f3   d
    T3 = r3    1  0  1  {1, 3}   T4 = r1   0  0  0  {1}
         r5    0  0  1  {2}           r5   0  0  1  {2}

           Fig. 1. Boundary subtables T1, T2, T3, T4 of the decision table T0


   In the case of the generalized decision approach, we consider the sets of deci-
sions attached to rows of the table T0 as values of a new decision attribute (see
Fig. 2). The number of boundary subtables for this approach is equal to 10, and
each boundary subtable of the table T0 has 2 rows (see Theorem 1).
   Let fi1, . . . , fim ∈ {f1, . . . , fn} and δ1, . . . , δm ∈ {0, 1}. We denote by

                                 T(fi1, δ1) . . . (fim, δm)

the subtable of the table T which consists of all rows that have numbers
δ1, . . . , δm at the intersection with columns fi1, . . . , fim, respectively.

              f1 f2 f3   d           d
         r1    0  0  0  {1}          1
         r2    0  1  1  {1, 2}       2
    T0 = r3    1  0  1  {1, 3}   ⇒   3
         r4    1  1  0  {2, 3}       4
         r5    0  0  1  {2}          5

Fig. 2. Transformation of the decision attribute of the table T0 for the generalized
decision approach



    A decision tree over T is a finite tree with a root in which each terminal node is
labeled with a decision (a natural number), and each nonterminal node is labeled
with an attribute from the set {f1, . . . , fn}. Two edges start in each nonterminal
node; these edges are labeled with 0 and 1, respectively.
    Let Γ be a decision tree over T and v be a node of Γ. We attach to the
node v a subtable T(v) of the table T. If v is the root of Γ then T(v) = T.
Otherwise, let the nodes and edges in the path from the root to v be labeled with
attributes fi1, . . . , fim and numbers δ1, . . . , δm, respectively. Then T(v) is the
subtable T(fi1, δ1) . . . (fim, δm) of the table T.
    It is clear that for any row r of T there exists exactly one terminal node v in
Γ such that r belongs to T(v). The decision attached to v will be considered as
the result of the work of Γ on the row r.
    Let α be a real number such that 0 ≤ α < 1. We will say that Γ is an
α-decision tree for T if, for any terminal node v of Γ, the inequality B(T(v)) ≤
αB(T) holds, and the node v is labeled with the most common decision for T(v).
An example of an exact decision tree constructed for the table T0 can be found
in Fig. 3; a 0.25-decision tree constructed for the table T0 can be found in Fig. 5.

                                       
                                       f1
                                  0 /      \ 1
                                  f3         3
                              0 /    \ 1
                              1        2

                               Fig. 3. Exact decision tree




   We denote by h(Γ) the depth of a decision tree Γ, which is the maximum
length of a path from the root to a terminal node. We denote by hα(T) the
minimum depth of an α-decision tree for the table T. We denote by l(δ) the
path length for an arbitrary row δ of the table T; it is the length of the path from
the root to the terminal node of Γ which accepts δ. The average depth of Γ is equal
to the total path length, i.e., the sum of l(δ) over all rows δ of T, divided by
N(T), the number of rows in the decision table T.
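   To make these notions concrete, here is a minimal Python sketch (our own
illustration, not code from the paper), assuming a tree is encoded as nested
tuples in which a terminal node is a decision and a nonterminal node is a triple
(attribute index, 0-subtree, 1-subtree):

    def path_length(tree, row):
        # Length l(delta) of the path from the root to the terminal
        # node of the tree that accepts the given row.
        length = 0
        while isinstance(tree, tuple):
            i, zero, one = tree
            tree = one if row[i] == 1 else zero
            length += 1
        return length

    def average_depth(tree, rows):
        # Total path length (sum of l(delta) over all rows) divided by N(T).
        return sum(path_length(tree, r) for r in rows) / len(rows)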


3   Set of Tables Tab(t)

We denote by Tab(t), where t is a natural number, the set of decision tables with
many-valued decisions such that each row in the table has at most t decisions.

Theorem 1. [6] Each boundary subtable of a table T ∈ Tab(t) has at most t + 1
rows.

   Therefore, for tables from Tab(t), there exists a polynomial algorithm for the
computation of the parameter B(T). For example, for any decision table T with
one-valued decisions (in fact, for any table from Tab(1)), the equality B(T) = P(T)
holds, where P(T) is the number of unordered pairs of rows of T with different
decisions.
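   By Theorem 1, B(T) for a table from Tab(t) can be computed by inspecting only
row subsets of size at most t + 1. The following Python sketch (our own
illustration; the row representation and all names are assumptions, not code from
the paper) does exactly this; for t = 1 it reduces to counting the pairs P(T):

    from itertools import combinations

    def is_degenerate(rows):
        # A subtable is degenerate if it has no rows or the intersection
        # of the decision sets attached to its rows is nonempty.
        if not rows:
            return True
        return len(set.intersection(*(decs for _, decs in rows))) > 0

    def count_boundary_subtables(table, t):
        # B(T) for a table from Tab(t): by Theorem 1 every boundary
        # subtable has at most t + 1 rows, so for fixed t the number of
        # candidate subsets is polynomial in the number of rows.
        count = 0
        for size in range(2, t + 2):  # a single row is always degenerate
            for sub in combinations(table, size):
                if is_degenerate(sub):
                    continue
                # every proper subtable must be degenerate; by monotonicity
                # it suffices to check removal of one row at a time
                if all(is_degenerate(sub[:i] + sub[i + 1:]) for i in range(size)):
                    count += 1
        return count

    # The table T0 from Fig. 1: a row is (attribute values, set of decisions).
    T0 = [((0, 0, 0), {1}), ((0, 1, 1), {1, 2}), ((1, 0, 1), {1, 3}),
          ((1, 1, 0), {2, 3}), ((0, 0, 1), {2})]
    print(count_boundary_subtables(T0, t=2))  # 4: the subtables T1, ..., T4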


4   Greedy Algorithm Uα

To construct an α-decision tree for T, we first construct all boundary
subtables of the decision table T and compute the value of B(T).
    Let α be a real number such that 0 ≤ α < 1, and let T have n columns labeled
with attributes f1, . . . , fn. We now describe an algorithm Uα which, for a given
decision table with many-valued decisions, constructs an α-decision tree Uα(T)
for the table T.

 – Step 1. Construct a tree consisting of a single node labeled with the table T
   and proceed to the second step.
 – Suppose t ≥ 1 steps have already been made. The tree obtained at step t
   will be denoted by G.
 – Step (t + 1). If no node of the tree G is labeled with a table, then we
   denote by Uα(T) the tree G. The work of the algorithm Uα is completed.
 – Otherwise, we choose a node v in the tree G which is labeled with a subtable
   of the table T. Let the node v be labeled with the table T′.
   If B(T′) ≤ αB(T), then instead of T′ we mark the node v with the most
   common decision for T′ and proceed to step (t + 2).
   Let B(T′) > αB(T). Then, for i = 1, . . . , n, we compute the value

                       Q(fi) = max{B(T′(fi, 0)), B(T′(fi, 1))}.

   Instead of T′ we mark the node v with the attribute fi0, where i0 is the
   minimum i for which Q(fi) has the minimum value. For each δ ∈ {0, 1}, we
   add to the tree G the node v(δ), mark this node with the subtable T′(fi0, δ),
   draw an edge from v to v(δ), and mark this edge with δ. Proceed to step
   (t + 2). A Python sketch of this procedure is given after this list.
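   The following sketch (our own illustration, reusing count_boundary_subtables
and the table T0 from the sketch in Sect. 3; the names u_alpha, split, and
most_common_decision are ours) implements these steps recursively:

    def split(table, i, delta):
        # The subtable T(f_i, delta): rows whose i-th attribute equals delta.
        return [row for row in table if row[0][i] == delta]

    def most_common_decision(table):
        # The minimum decision among those belonging to the maximum number
        # of decision sets attached to rows; 1 for an empty table.
        if not table:
            return 1
        candidates = {d for _, decs in table for d in decs}
        return max(candidates,
                   key=lambda d: (sum(d in decs for _, decs in table), -d))

    def u_alpha(table, t, alpha):
        # Terminal nodes are decisions; a nonterminal node is the triple
        # (attribute index, 0-subtree, 1-subtree).
        threshold = alpha * count_boundary_subtables(table, t)

        def build(sub):
            if count_boundary_subtables(sub, t) <= threshold:
                return most_common_decision(sub)
            n = len(sub[0][0])
            # Q(f_i) = max{B(T'(f_i, 0)), B(T'(f_i, 1))}; min(...) returns
            # the minimum index among the attributes minimizing Q(f_i)
            i0 = min(range(n), key=lambda i: max(
                count_boundary_subtables(split(sub, i, 0), t),
                count_boundary_subtables(split(sub, i, 1), t)))
            return (i0, build(split(sub, i0, 0)), build(split(sub, i0, 1)))

        return build(table)

    print(u_alpha(T0, t=2, alpha=0.25))  # (0, 1, 3)

For the table T0 the sketch returns (0, 1, 3): the root tests f1, the 0-edge leads
to the terminal decision 1, and the 1-edge leads to the terminal decision 3,
matching the 0.25-decision tree of Fig. 5.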

                                          
                                  f1
                           1 /          \ 0
                  f1 f2 f3   d         f1 f2 f3   d
                   1  0  1  {1, 3}      0  0  0  {1}
                   1  1  0  {2, 3}      0  1  1  {1, 2}
                                        0  0  1  {2}

                       Fig. 4. Construction of 0.25-decision tree



    We now present an example of the work of the greedy algorithm U0.25, which
constructs a 0.25-decision tree for the decision table T0. All boundary subtables
of the decision table T0 are depicted in Fig. 1, so B(T0) = 4 and αB(T0) = 1.
    The table T0 is not degenerate, so for i ∈ {1, 2, 3} we compute the value of
Q(fi): Q(f1) = max{1, 0} = 1, Q(f2) = max{2, 0} = 2, Q(f3) = max{1, 1} = 1.
We choose the attribute f1 as the root of the 0.25-decision tree (see Fig. 4).
    B(T(f1, 0)) = 1 and B(T(f1, 1)) = 0, so both values are at most 0.25 × B(T0).
We label the terminal node under the edge 1 with the most common decision 3
for the subtable T(f1, 1), and the terminal node under the edge 0 with the most
common decision 1 for the subtable T(f1, 0). The result of the work of the
algorithm U0.25 on the table T0 is the 0.25-decision tree depicted in Fig. 5.

                                         
                                       f1
                                  0 /      \ 1
                                  1          3

                       Fig. 5. 0.25-decision tree for the table T0



    Now, we present an upper bound on the number of steps of the algorithm Uα.

Theorem 2. [6] Let α be a real number such that 0 ≤ α < 1, and T be a
decision table with many-valued decisions. Then, during the construction of the
tree Uα(T), the algorithm Uα makes at most 2N(T) + 1 steps, where N(T) is the
number of rows in T.

   From this theorem it follows that, for any natural t, the algorithm Uα has
polynomial time complexity on the set Tab(t).

Theorem 3. For any real α, 0 < α < 1, and for any nondegenerate decision
table T with many-valued decisions,

                         h(Uα(T)) < h0(T) ln(1/α) + 1.

5      Experimental Results

In this section, we present experimental results. First, we show how we con-
structed decision tables with many-valued decisions based on data sets from
the UCI Machine Learning Repository [1]. In some tables there were missing
values; each such value was replaced with the most common value of the cor-
responding attribute. Some decision tables contain conditional attributes that
take a unique value for each row; such attributes were removed. In some tables
there were equal rows with, possibly, different decisions; in this case, each group
of identical rows was replaced with a single row from the group, labeled with
the set of decisions attached to the rows from the group. A sketch of this last
transformation is shown below.
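   As an illustration of this transformation (our own sketch, assuming each input
row is a pair of a tuple of attribute values and a single decision):

    from collections import defaultdict

    def merge_equal_rows(rows):
        # Replace each group of rows with equal values of the conditional
        # attributes by one row labeled with the set of decisions
        # attached to the rows of the group.
        groups = defaultdict(set)
        for values, decision in rows:
            groups[values].add(decision)
        return list(groups.items())

    # two inconsistent rows (0, 1) -> 1 and (0, 1) -> 2 become a single
    # row ((0, 1), {1, 2}) with a many-valued decision
    print(merge_equal_rows([((0, 1), 1), ((0, 1), 2), ((1, 1), 3)]))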


          Table 1. Characteristics of decision tables with many-valued decisions

    Decision            Rows  Attr  Spectrum                Removed
    table                            #1   #2  #3 #4 #5 #6   attributes
    balance-scale-1      125     3    45   50  30           left-weight
    breast-cancer-1      193     8   169   24   0           tumor-size
    breast-cancer-5       98     4    58   40               inv-nodes, node-caps, deg-malig,
                                                            breast-quad, irradiat
    cars-1               432     5   258  161  13           buying
    flags-5              171    21   159   12               zone, language, religion, circles,
                                                            sunstars
    hayes-roth-data-1     39     3    22   13   4           marital status
    kr-vs-kp-5          1987    31  1564  423               katri, mulch, rimmx, skrxp, wknck
    kr-vs-kp-4          2061    32  1652  409               katri, mulch, rimmx, wknck
    lymphography-5       122    13   113    9               lymphatics, changes in node,
                                                            changes in stru, special forms,
                                                            no of nodes in
    mushroom-5          4078    17  4048   30               odor, gill-size, stalk-root,
                                                            stalk-surface-below-ring, habitat
    nursery-4            240     4    97   96  47           parents, housing, finance, social
    nursery-1           4320     7  2858 1460   2           parents
    spect-test-1         164    21   161    3               F3
    teeth-1               22     7    12   10               top incisors
    teeth-5               14     3     6    3   0  5  0  2  bottom incisors, top canines,
                                                            bottom canines, top premolars,
                                                            bottom molars
    tic-tac-toe-4        231     5   102  129               top-right-square, middle-middle-square,
                                                            bottom-left-square, bottom-right-square
    tic-tac-toe-3        449     6   300  149               middle-middle-square, bottom-left-square,
                                                            bottom-right-square
    zoo-data-5            42    11    36    6               feathers, backbone, breathes, legs, tail




    To obtain rows labeled with sets containing more than one decision, we
removed additional conditional attributes from the decision tables. Information
about the resulting decision tables can be found in Table 1. This table contains
the name of the initial table, the number of rows (column "Rows"), the number of
attributes (column "Attr"), the spectrum of the table (column "Spectrum"), and
the names of the removed attributes (column "Removed attributes"). The spectrum
of a decision table with many-valued decisions is a sequence #1, #2, . . . , where
#i, i = 1, 2, . . . , is the number of rows labeled with sets of decisions of
cardinality i. A sketch of the spectrum computation is given below.
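   As a small illustration (our own sketch, reusing the row representation and
the table T0 from the sketches in Sects. 3 and 4), the spectrum can be computed
as follows:

    from collections import Counter

    def spectrum(table):
        # #i = the number of rows labeled with a decision set of cardinality i.
        return Counter(len(decs) for _, decs in table)

    # T0 from Fig. 1 has two one-element and three two-element decision
    # sets, i.e. #1 = 2 and #2 = 3
    print(spectrum(T0))  # Counter({2: 3, 1: 2})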
    Table 2 presents the depth of α-decision trees constructed by the algorithm
Uα for α ∈ {0, 0.001, 0.01, 0.1, 0.2, 0.5}.


        Table 2. Depth of α-decision trees constructed by the algorithm Uα

                     Decision                      α
                     table             0.0 0.001 0.01 0.1 0.2 0.5
                     balance-scale-1 2 2         2    1 1 1
                     breast-cancer-1 6 4         3    2 1 1
                     breast-cancer-5 3 3         2    1 1 1
                     cars-1            5 3       2    1 1 1
                     flags-5           6 4       3    1 1 1
                     hayes-roth-data-1 2 2       2    1 1 1
                     kr-vs-kp-5        13 6      4    2 2 1
                     kr-vs-kp-4        12 6      4    2 2 1
                     lymphography-5 6 5          3    2 1 1
                     mushroom-5        7 3       2    1 1 1
                     nursery-4         2 2       1    1 1 1
                     nursery-1         7 3       2    1 1 1
                     spect-test-1      7 7       4    2 2 1
                     teeth-1           4 4       3    2 1 1
                     teeth-5           3 3       3    2 1 1
                     tic-tac-toe-4     5 4       3    2 1 1
                     tic-tac-toe-3     6 4       3    2 1 1
                     zoo-data-5        4 4       3    2 2 1



    Table 3 presents the average depth of α-decision trees constructed by the
algorithm Uα for α ∈ {0, 0.001, 0.01, 0.1, 0.2, 0.5}.
    Based on the results presented in Tables 2 and 3, we can see that the depth
and average depth of α-decision trees are nonincreasing when α increases up to
α = 0.5. The largest depth of an exact decision tree is observed for the decision
tables "kr-vs-kp-5" and "kr-vs-kp-4"; however, for 0.1-decision trees for these
tables, the depth is equal to 2.
    To make a comparative study, we also present results of experiments connected
with the depth and average depth of α-decision trees for the approach based on
generalized decisions.
    For the decision tables described in Table 1 and α ∈ {0, 0.001, 0.01, 0.1,
0.2, 0.5}, we constructed α-decision trees by the greedy algorithm. Table 4
presents the depth of the constructed α-decision trees, and Table 5 the average
depth, for the approach based on generalized decisions.

  Table 3. The average depth of α-decision trees constructed by the algorithm Uα

                  Decision                       α
                  table             0.0 0.001 0.01 0.1 0.2 0.5
                  balance-scale-1 2.00 2.00 1.20 1.00 1.00 1.00
                  breast-cancer-1 3.72 3.05 2.23 1.35 1.00 1.00
                  breast-cancer-5 1.84 1.84 1.37 1.00 1.00 1.00
                  cars-1            2.90 2.63 2.00 1.00 1.00 1.00
                  flags-5           3.82 2.83 1.98 1.00 1.00 1.00
                  hayes-roth-data-1 1.74 1.74 1.74 1.00 1.00 1.00
                  kr-vs-kp-5        8.24 5.16 3.68 2.00 2.00 1.00
                  kr-vs-kp-4        8.13 5.24 3.69 2.00 1.58 1.00
                  lymphography-5 3.80 3.51 2.48 1.65 1.00 1.00
                  mushroom-5        2.78 2.20 1.61 1.00 1.00 1.00
                  nursery-4         1.33 1.33 1.00 1.00 1.00 1.00
                  nursery-1         2.82 2.00 1.67 1.00 1.00 1.00
                  spect-test-1      3.35 3.35 3.05 2.00 2.00 1.00
                  teeth-1           2.82 2.82 2.27 1.41 1.00 1.00
                  teeth-5           2.21 2.21 2.21 1.36 1.00 1.00
                  tic-tac-toe-4     3.00 2.90 2.34 1.34 1.00 1.00
                  tic-tac-toe-3     4.27 3.54 2.43 1.67 1.00 1.00
                  zoo-data-5        3.21 3.21 3.00 2.00 1.52 1.00

        Table 4. Depth of α-decision trees – generalized decision approach

                    Decision                      α
                    table             0.0 0.001 0.01 0.1 0.2 0.5
                    balance-scale-1 3 3         2    1 1 1
                    breast-cancer-1 7 4         3    1 1 1
                    breast-cancer-5 4 3         2    1 1 1
                    cars-1            5 3       2    1 1 1
                    flags-5           6 4       2    1 1 1
                    hayes-roth-data-1 3 3       2    1 1 1
                    kr-vs-kp-5        14 6      4    2 2 1
                    kr-vs-kp-4        14 6      4    2 2 1
                    lymphography-5 7 5          3    2 1 1
                    mushroom-5        8 3       2    1 1 1
                    nursery-4         4 2       2    1 1 1
                    nursery-1         7 3       2    1 1 1
                    spect-test-1      10 9      4    2 2 1
                    teeth-1           4 4       3    2 1 1
                    teeth-5           3 3       3    2 1 1
                    tic-tac-toe-4     5 4       3    2 1 1
                    tic-tac-toe-3     6 4       3    2 1 1
                    zoo-data-5        7 7       4    2 2 1


   Table 6, based on results from Tables 2 and 4, presents a comparison of the
depth of α-decision trees for the proposed approach and the approach based on
the generalized decision. Each entry of this table is equal to the depth of an
α-decision tree for the generalized decision approach divided by the depth of an
α-decision tree for the proposed approach.

     Table 5. The average depth of α-decision trees – generalized decision approach

                    Decision                       α
                    table             0.0 0.001 0.01 0.1 0.2 0.5
                    balance-scale-1 3.00 2.76 2.00 1.00 1.00 1.00
                    breast-cancer-1 4.12 3.14 2.40 1.00 1.00 1.00
                    breast-cancer-5 2.60 2.19 1.40 1.00 1.00 1.00
                    cars-1            4.51 3.00 2.00 1.00 1.00 1.00
                    flags-5           3.91 2.83 1.88 1.00 1.00 1.00
                    hayes-roth-data-1 2.31 2.31 1.74 1.00 1.00 1.00
                    kr-vs-kp-5        9.49 5.33 3.89 2.00 2.00 1.00
                    kr-vs-kp-4        9.47 5.34 3.79 2.00 2.00 1.00
                    lymphography-5 4.36 3.57 2.44 1.65 1.00 1.00
                    mushroom-5        2.90 2.20 1.61 1.00 1.00 1.00
                    nursery-4         2.42 2.00 2.00 1.00 1.00 1.00
                    nursery-1         4.95 3.00 2.00 1.00 1.00 1.00
                    spect-test-1      4.34 4.31 3.83 2.00 2.00 1.00
                    teeth-1           2.82 2.82 2.27 1.41 1.00 1.00
                    teeth-5           2.21 2.21 2.21 1.36 1.00 1.00
                    tic-tac-toe-4     4.54 3.49 2.68 2.00 1.00 1.00
                    tic-tac-toe-3     5.28 3.69 2.60 1.70 1.00 1.00
                    zoo-data-5        4.10 4.10 3.19 2.00 2.00 1.00


                   Table 6. Comparison of depth of α-decision trees

                    Decision                       α
                    table             0.0 0.001 0.01 0.1 0.2 0.5
                    balance-scale-1 1.50 1.50 1.00 1.00 1.00 1.00
                    breast-cancer-1 1.17 1.00 1.00 0.50 1.00 1.00
                    breast-cancer-5 1.33 1.00 1.00 1.00 1.00 1.00
                    cars-1            1.00 1.00 1.00 1.00 1.00 1.00
                    flags-5           1.00 1.00 0.67 1.00 1.00 1.00
                    hayes-roth-data-1 1.50 1.50 1.00 1.00 1.00 1.00
                    kr-vs-kp-5        1.08 1.00 1.00 1.00 1.00 1.00
                    kr-vs-kp-4        1.17 1.00 1.00 1.00 1.00 1.00
                    lymphography-5 1.17 1.00 1.00 1.00 1.00 1.00
                    mushroom-5        1.14 1.00 1.00 1.00 1.00 1.00
                    nursery-4         2.00 1.00 2.00 1.00 1.00 1.00
                    nursery-1         1.00 1.00 1.00 1.00 1.00 1.00
                    spect-test-1      1.43 1.29 1.00 1.00 1.00 1.00
                    teeth-1           1.00 1.00 1.00 1.00 1.00 1.00
                    teeth-5           1.00 1.00 1.00 1.00 1.00 1.00
                    tic-tac-toe-4     1.00 1.00 1.00 1.00 1.00 1.00
                    tic-tac-toe-3     1.00 1.00 1.00 1.00 1.00 1.00
                    zoo-data-5        1.75 1.75 1.33 1.00 1.00 1.00



    Table 7, based on results from Tables 3 and 5, presents a comparison of the
average depth of α-decision trees for the proposed approach and the approach
based on the generalized decision. Each entry of this table is equal to the
average depth of an α-decision tree for the generalized decision approach divided
by the average depth of an α-decision tree for the proposed approach.
    The difference in depth and average depth of α-decision trees is noticeable
for small values of α (see Table 6 and Table 7). For α ∈ {0.2, 0.5}, the values
of depth and average depth of α-decision trees are almost the same. Notably, in
Table 6, for the decision table "nursery-4" and α = 0.0 and α = 0.01, the depth
of the α-decision tree for the proposed approach is half of that for the
generalized decision approach. We can also find a decision table ("breast-
cancer-1" with α = 0.1) where the depth of the α-decision tree for the proposed
approach is greater than the depth of the α-decision tree for the generalized
decision approach (the ratio in Table 6 is 0.50). A similar situation for the
average depth can be observed in Table 7.


           Table 7. Comparison of the average depth of α-decision trees

                  Decision                        α
                  table             0.0 0.001 0.01 0.1 0.2 0.5
                  balance-scale-1 1.50 1.38 1.67 1.00 1.00 1.00
                  breast-cancer-1 1.11 1.03 1.08 0.74 1.00 1.00
                  breast-cancer-5 1.41 1.19 1.02 1.00 1.00 1.00
                  cars-1            1.56 1.14 1.00 1.00 1.00 1.00
                  flags-5           1.02 1.00 0.95 1.00 1.00 1.00
                  hayes-roth-data-1 1.33 1.33 1.00 1.00 1.00 1.00
                  kr-vs-kp-5        1.15 1.03 1.06 1.00 1.00 1.00
                  kr-vs-kp-4        1.16 1.02 1.03 1.00 1.27 1.00
                  lymphography-5 1.15 1.02 0.98 1.00 1.00 1.00
                  mushroom-5        1.04 1.00 1.00 1.00 1.00 1.00
                  nursery-4         1.82 1.50 2.00 1.00 1.00 1.00
                  nursery-1         1.76 1.50 1.20 1.00 1.00 1.00
                  spect-test-1      1.30 1.29 1.26 1.00 1.00 1.00
                  teeth-1           1.00 1.00 1.00 1.00 1.00 1.00
                  teeth-5           1.00 1.00 1.00 1.00 1.00 1.00
                  tic-tac-toe-4     1.51 1.20 1.15 1.49 1.00 1.00
                  tic-tac-toe-3     1.24 1.04 1.07 1.02 1.00 1.00
                  zoo-data-5        1.28 1.28 1.06 1.00 1.32 1.00




6   Conclusions
We presented the algorithm Uα, 0 ≤ α < 1, for the construction of α-decision
trees for decision tables with many-valued decisions. We showed that, for an
arbitrary natural t, the considered algorithm has polynomial time complexity on
tables that have at most t decisions in each set of decisions attached to rows.
Based on the results presented in Sect. 5, we can observe that the greedy
algorithm Uα constructs, for decision tables with many-valued decisions,
α-decision trees with relatively small depth.
    Based on the results of the comparison, we can see that the depth and
average depth of α-decision trees constructed in the framework of our approach
(one decision from the set of decisions attached to a row) are usually less than
the depth and average depth of α-decision trees constructed in the framework of
the generalized decision approach (all decisions from the set of decisions
attached to a row). So, if it is enough to find only one decision, our approach
can be used. However, it is necessary to find simpler uncertainty measures for
the greedy algorithm, since the work with boundary subtables is too complicated.


References
 1. Asuncion, A., Newman, D.J.: UCI Machine Learning Repository. http://www.
    ics.uci.edu/~mlearn/ (2007)
 2. Azad, M., Chikalov, I., Moshkov, M., Zielosko, B.: Greedy algorithms for construc-
    tion of approximate tests. Fundam. Inform. (2012) (to appear).
 3. Chikalov, I., Zielosko, B.: Decision rules for decision tables with many-valued
    decisions. In Yao, J., Ramanna, S., Wang, G., Suraj, Z., eds.: RSKT 2011. Volume
    6954 of LNCS., Springer (2011) 763–768
 4. Cohen, W.W.: Learning trees and rules with set-valued features. In: AAAI’96 -
    Volume 1, AAAI Press (1996) 709–716
 5. Moshkov, M., Zielosko, B.: Combinatorial Machine Learning - A Rough Set Ap-
    proach. Springer, Heidelberg (2011)
 6. Moshkov, M., Zielosko, B.: Construction of α-decision trees for tables with many-
    valued decisions. In Yao, J., Ramanna, S., Wang, G., Suraj, Z., eds.: RSKT 2011.
    Volume 6954 of LNCS., Springer (2011) 486–494
 7. Moshkov, M., Zielosko, B.: Construction of tests for tables with many-valued
    decisions. In: 20th International Workshop Concurrency, Specification and Pro-
    gramming CS&P 2011, September 28–30, Pultusk, Poland, Bialystok University of
    Technology (2011) 376–384
 8. Orlowska, E., Pawlak, Z.: Representation of nondeterministic information. Theor.
    Comput. Sci. 29 (1984) 27–39
 9. Pawlak, Z.: Rough Sets - Theoretical Aspects of Reasoning about Data. Kluwer
    Academic Publishers, Dordrecht (1991)
10. Pawlak, Z., Skowron, A.: Rough sets and Boolean reasoning. Inf. Sci. 177(1) (2007)
    41–73
11. Pawlak, Z., Skowron, A.: Rudiments of rough sets. Inf. Sci. 177(1) (2007) 3–27
12. Quinlan, J.R.: Induction of decision trees. Mach. Learn. (1986) 81–106
13. Skowron, A., Rauszer, C.: The discernibility matrices and functions in information
    systems. In: Intelligent Decision Support. Handbook of Applications and Advances
    of the Rough Set Theory. Kluwer Academic Publishers, Dordrecht (1992) 331–362
14. Yang, H., Xu, Z., Zhang, J., Cai, J.: A constructing method of decision tree and
    classification rule extraction for incomplete information system. In: CASON ’10,
    IEEE Computer Society (2010) 49–52