<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Rules from granules vs. granulated rules</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
<institution>Faculty of Mathematics and Computer Science, University of Warmia and Mazury in Olsztyn, Poland</institution>
        </aff>
      </contrib-group>
      <abstract>
<p>This article compares classification effects obtained with an exhaustive set of decision rules and with a granular set of rules. In the standard approach, the chosen data set is first granulated with an optimal granulation radius and new decision rules are generated at the end; our method, in contrast, builds the decision rules first and then granulates them using known methods.</p>
      </abstract>
      <kwd-group>
        <kwd>data granulation</kwd>
        <kwd>decision rules</kwd>
        <kwd>Rough Sets</kwd>
        <kwd>Decision Systems</kwd>
<kwd>Classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Data approximation methods are especially important in big data analysis, where the internal knowledge is often more important than any single data sample itself. One of the most important paradigms is granular rough computing. This idea introduces the concept of data granules in terms of rough set theory [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The term granule was initially used by Lotfi Zadeh [27] to denote a group of objects grouped together in the sense of a similarity relation.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] Polkowski introduced a simple yet effective idea of data approximation using rough inclusions. This approach, named standard granulation, relies on creating granules of r-indiscernible objects; a covering of the original training data is then computed, and finally new objects are created from the granular reflections by majority voting.
      </p>
      <p>
        New techniques and their applications were developed and described in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]-[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], Polkowski [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]-[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], and Polkowski and Artiemjew [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]-[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. For other applications of the data approximation process, classification and missing values absorption, see [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>There are many other granulation techniques, such as concept-dependent granulation, layered granulation and the recently developed homogeneous granulation [28]. The main goal of the granulation process is to reduce the amount of data while maintaining the internal knowledge, so that an acceptable level of classification accuracy is preserved.</p>
      <p>One of the simplest ideas for data classification is to build decision rules from the data and then combine them with a classifier. Again, there are many known rule generation algorithms, from exhaustive rules, through the LEM2 algorithm, to sequential covering. Considering that granulation reduces the amount of data processed during classification, and that in some cases this amount can be reduced further by generating decision rules, we arrived at the idea of rules granulation as an extension of this approach. In the next part of this section we present a theoretical introduction to the classical methods we used as a base of comparison for our approach.</p>
      <p>The rest of the paper is organized as follows. In Sect. 1 we present the theoretical introduction to granular rough computing and decision rules. In Sect. 2 we give a detailed description of our approach to rules granulation with a toy example. In Sect. 3 we introduce the classifier used in the experimental part. In Sect. 4 the experimental results are presented, and we conclude the paper in Sect. 5.</p>
      <p>The granulation process consists of three basic steps: granules are formed around the training objects, a covering of the universe of training objects is chosen, and finally the granular reflection is obtained from the covering granules by a majority voting procedure. As a final step, decision rules are built and a classification process is performed. We begin with the basic notions of rough inclusions to introduce the first step.</p>
      <p>
        Theoretical background: granular rough inclusions. The models for rough mereology, which give us the methods by which rough inclusions are defined, are presented in Polkowski [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]-[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]; a detailed discussion may be found in Polkowski [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>Consider a rough inclusion μ on the universe U of a decision system D = (U, A, d). We introduce the parameter r_gran, the granulation radius, with values 0, 1/|A|, 2/|A|, ..., 1. For each object u ∈ U and r = r_gran, the standard granule g(u, r, μ) of radius r about u is defined as</p>
      <p>g(u, r, μ) = {v ∈ U : μ(v, u, r)}. (1)</p>
      <p>The standard rough inclusion is defined as</p>
      <p>μ(v, u, r) ⇔ |IND(u, v)| / |A| ≥ r, (2)</p>
      <p>where</p>
      <p>IND(u, v) = {a ∈ A : a(u) = a(v)}. (3)</p>
      <p>It follows that this rough inclusion extends the indiscernibility relation to a degree of r.</p>
      <p>For concept-dependent granulation, the granule g_r^cd(u) is defined by</p>
      <p>v ∈ g_r^cd(u) if and only if μ(v, u, r) and d(u) = d(v), (4)</p>
      <p>for a given rough (weak) inclusion μ.</p>
      <p>The rules granulation method itself is presented in Sect. 2.</p>
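      <p>To make the definitions concrete, a minimal Python sketch of the standard granule computation is given below. The toy decision system and the helper names (ind, inclusion_degree, granule) are illustrative assumptions, not the authors' implementation.</p>
      <preformat>
```python
# Sketch of standard granules g(u, r, mu) over a toy decision system.
# Data and helper names are illustrative assumptions, not the paper's code.

def ind(u, v, attributes):
    """IND(u, v): the attributes on which objects u and v agree."""
    return {a for a in attributes if u[a] == v[a]}

def inclusion_degree(u, v, attributes):
    """|IND(u, v)| / |A|; mu(v, u, r) holds when this degree is at least r."""
    return len(ind(u, v, attributes)) / len(attributes)

def granule(u, universe, attributes, r):
    """Standard granule g(u, r, mu): all v with inclusion degree at least r."""
    return [v for v in universe if inclusion_degree(u, v, attributes) >= r]

# Toy decision system: A = {a1, a2}, decision attribute d.
U = [
    {"a1": 1, "a2": 0, "d": 0},
    {"a1": 1, "a2": 1, "d": 0},
    {"a1": 0, "a2": 1, "d": 1},
]
# v must agree with U[0] on at least 1 of the 2 attributes
g = granule(U[0], U, ["a1", "a2"], r=0.5)
```
      </preformat>
      <p>Here the granule around the first object contains the first two objects, since each agrees with it on at least one of the two attributes.</p>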
      <sec id="sec-1-1">
        <title>Covering of decision system</title>
        <p>
          In this step the universe of training objects is covered by the computed granules using a selected strategy. One of the most effective methods among those studied (see [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ]) is simple random choice, and thus this method is selected for our experiments. In the next section the last step of the granulation process is described.
        </p>
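        <p>The random-choice covering strategy can be sketched as follows; representing granules as sets of object identifiers keyed by their center objects is an assumption made for this illustration, not the authors' implementation.</p>
        <preformat>
```python
import random

# Sketch of covering by random choice: visit granules in random order and
# keep each one that still adds uncovered training objects, until the whole
# universe of object ids is covered.

def random_covering(granules, universe_ids, seed=0):
    rng = random.Random(seed)
    remaining = set(universe_ids)
    order = list(granules.items())
    rng.shuffle(order)  # random choice of granules
    cover = []
    for center, members in order:
        if remaining.intersection(members):  # granule adds uncovered objects
            cover.append(center)
            remaining.difference_update(members)
        if not remaining:
            break
    return cover

# Granules keyed by the id of their center object.
granules = {0: {0, 1}, 1: {0, 1}, 2: {2}}
cover = random_covering(granules, [0, 1, 2])
```
        </preformat>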
      </sec>
      <sec id="sec-1-2">
        <title>Granular reflections</title>
        <p>Once the granular covering is selected, the idea is to represent each granule by a single object. The strategy for obtaining it can be majority voting MV, so for each granule g ∈ COV(U, μ, r) the final representation is formed as</p>
        <p>{MV({a(u) : u ∈ g}) : a ∈ A ∪ {d}}, (5)</p>
        <p>where for numerical data we treat descriptors as indiscernible in case |max_i a(u_i) - min_j a(u_j)| ≤ ε, with i, j ranging over the objects in the granule.</p>
        <p>The granular reflection of the decision system D = (U, A, d), where U is the universe of objects, A the set of conditional attributes and d the decision attribute, is formed from the granules of the covering COV(U, μ, r).</p>
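        <p>The majority-voting step can be sketched as below. This minimal version handles symbolic values only, ignores the ε-indiscernibility treatment of numerical data, and leaves ties to the first most common value (the paper resolves them randomly); all names are illustrative assumptions.</p>
        <preformat>
```python
from collections import Counter

# Sketch of a granular reflection by majority voting: each covering granule
# is replaced by one object whose value on every attribute (and the decision)
# is the most frequent value among the granule's members.

def majority_voting(granule, attributes):
    reflected = {}
    for a in attributes:
        values = [u[a] for u in granule]
        # most_common(1) returns the single most frequent value
        reflected[a] = Counter(values).most_common(1)[0][0]
    return reflected

g = [
    {"a1": 1, "a2": 0, "d": 0},
    {"a1": 1, "a2": 1, "d": 0},
]
rep = majority_voting(g, ["a1", "a2", "d"])  # one object represents g
```
        </preformat>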
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Used method and toy example of decision rules granulation</title>
      <p>The approach used for rules granulation can be described in the following steps.</p>
      <sec id="sec-2-1">
        <title>Step 1:</title>
        <p>Exhaustive rules are generated from the given dataset.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Step 2:</title>
        <p>Rules of length 1 (only one descriptor) are excluded from granulation and moved directly to the final set.</p>
      </sec>
      <sec id="sec-2-3">
        <title>Step 3:</title>
        <p>Rules are divided into separate sets containing rules of the same length. For each set, concept-dependent granulation is performed, meaning that only rules from the same decision class are compared when computing the indiscernibility. New rules are created using the majority voting method, and possible conflicts are resolved at random.</p>
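        <p>A minimal sketch of this step is given below, under the assumption that a rule is encoded as a dictionary of attribute-value descriptors plus a decision; the encoding and the function name are illustrative, not the authors' implementation.</p>
        <preformat>
```python
from collections import Counter
import random

# Sketch: rules of equal length and equal decision class that share the same
# attribute set are merged by majority voting over each attribute's values;
# ties are resolved at random, as in the concept-dependent granulation step.

def merge_rules(rules, seed=0):
    rng = random.Random(seed)
    groups = {}
    for cond, dec in rules:
        key = (frozenset(cond), dec)  # same attributes, same decision class
        groups.setdefault(key, []).append(cond)
    out = []
    for (attrs, dec), group in groups.items():
        new_cond = {}
        for a in attrs:
            counts = Counter(c[a] for c in group).most_common()
            top = [v for v, n in counts if n == counts[0][1]]
            new_cond[a] = rng.choice(top)  # ties resolved at random
        out.append((new_cond, dec))
    return out

rules = [
    ({"a3": 1.0, "a9": 1.0, "a11": 0.0}, 1.0),
    ({"a3": 0.875, "a9": 0.0, "a11": 1.0}, 1.0),
]
# Same attributes (a3, a9, a11) and same decision, so the two rules merge.
granulated = merge_rules(rules)
```
        </preformat>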
      </sec>
      <sec id="sec-2-4">
        <title>Step 4:</title>
        <p>The newly created rules for each length and indiscernibility radius are put together. The support (number of occurrences) of each rule is calculated and possible conflicting rules are removed. This approach favors longer rules because of their higher support values.</p>
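        <p>This consolidation step can be sketched as follows, assuming rules are encoded as hashable (premise, decision) tuples; the encoding and the function name are assumptions made for illustration.</p>
        <preformat>
```python
from collections import Counter

# Sketch of the consolidation step: granular rules from all lengths and radii
# are pooled, support is the number of occurrences of an identical rule, and
# when two rules have the same premise but conflicting decisions, only the
# one with higher support is kept.

def consolidate(rules):
    support = Counter(rules)  # rule = (premise, decision), both hashable
    best = {}
    for (premise, dec), sup in support.items():
        kept = best.get(premise)
        if kept is None or sup > kept[1]:
            best[premise] = (dec, sup)
    return [(premise, dec, sup) for premise, (dec, sup) in best.items()]

pool = [
    ((("a5", 9.0), ("a13", 80.0)), 1.0),
    ((("a5", 9.0), ("a13", 80.0)), 1.0),
    ((("a5", 9.0), ("a13", 80.0)), 0.0),  # conflicting decision, lower support
]
final = consolidate(pool)  # the conflicting low-support rule is dropped
```
        </preformat>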
        <p>
          Because this granulation approach changes the number of rules only slightly, especially for larger datasets, it is hard to present a solid toy example. The following sample from the Australian-credit dataset illustrates the concepts of concept-dependent rules granulation. Below are 10 randomly selected rules from the exhaustive set generated on Australian-credit; rules of length 1 were omitted. The number in square brackets is the support of the rule.
(a2 = 34.08) (a6 = 4.0) ⇒ d = 0.0 [1]
(a3 = 0.25) (a8 = 1.0) ⇒ d = 1.0 [1]
(a5 = 9.0) (a13 = 80.0) ⇒ d = 1.0 [1]
(a2 = 15.83) (a3 = 0.585) ⇒ d = 1.0 [1]
(a5 = 7.0) (a6 = 4.0) (a14 = 2.0) ⇒ d = 0.0 [1]
(a3 = 1.0) (a9 = 1.0) (a11 = 0.0) ⇒ d = 1.0 [1]
(a7 = 0.415) (a9 = 1.0) (a13 = 0.0) ⇒ d = 1.0 [1]
(a3 = 0.875) (a9 = 0.0) (a11 = 1.0) ⇒ d = 1.0 [1]
(a5 = 6.0) (a7 = 3.5) (a10 = 0.0) (a12 = 2.0) ⇒ d = 0.0 [1]
(a1 = 1.0) (a4 = 2.0) (a5 = 8.0) (a13 = 160.0) (a14 = 1.0) ⇒ d = 1.0 [1]
        </p>
        <p>
          Let us take three rules into consideration:
(a3 = 1.0) (a9 = 1.0) (a11 = 0.0) ⇒ d = 1.0 [1]
(a7 = 0.415) (a9 = 1.0) (a13 = 0.0) ⇒ d = 1.0 [1]
(a3 = 0.875) (a9 = 0.0) (a11 = 1.0) ⇒ d = 1.0 [1]
These are all rules of length 3 with decision 1, so they are granulated as a single set. As we can see, two of them (the first and the third) have the same attribute numbers with slightly different values. When we run majority voting on those two rules a new rule is built; since no value dominates on any attribute, a random choice is made for each. In this run the new rule
(a3 = 1.0) (a9 = 0.0) (a11 = 0.0) ⇒ d = 1.0 [1]
was built, but this process is not deterministic.
        </p>
        <p>
          The final set of granulated rules looks as follows:
(a2 = 34.08) (a6 = 4.0) ⇒ d = 0.0 [3]
(a3 = 0.25) (a8 = 1.0) ⇒ d = 1.0 [3]
(a5 = 9.0) (a13 = 80.0) ⇒ d = 1.0 [3]
(a2 = 15.83) (a3 = 0.585) ⇒ d = 1.0 [3]
(a5 = 7.0) (a6 = 4.0) (a14 = 2.0) ⇒ d = 0.0 [4]
(a3 = 1.0) (a9 = 0.0) (a11 = 0.0) ⇒ d = 1.0 [1]
(a7 = 0.415) (a9 = 1.0) (a13 = 0.0) ⇒ d = 1.0 [4]
(a3 = 0.875) (a9 = 0.0) (a11 = 1.0) ⇒ d = 1.0 [3]
(a3 = 1.0) (a9 = 1.0) (a11 = 0.0) ⇒ d = 1.0 [3]
(a5 = 6.0) (a7 = 3.5) (a10 = 0.0) (a12 = 2.0) ⇒ d = 0.0 [5]
(a1 = 1.0) (a4 = 2.0) (a5 = 8.0) (a13 = 160.0) (a14 = 1.0) ⇒ d = 1.0 [6]
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Classification process</title>
      <p>We have designed a rule-based classifier, which fits the set of decision rules to the classified objects and votes for a decision. Ties are resolved randomly. We measured effectiveness using the global accuracy parameter, defined as the percentage of correctly classified objects. In our experiments we use the exhaustive set of rules with minimal descriptor length. We generate non-conflicting decision rules starting from those of length equal to one. The algorithm finishes its work at the rule length for which there are no more candidates, or for which only a minor number of rules is generated in comparison with the whole computed set.</p>
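      <p>A minimal sketch of such a rule-based voting classifier is given below; the rule and object encodings are assumptions made for illustration, not the authors' implementation.</p>
      <preformat>
```python
import random

# Sketch of the rule-based classifier: a rule fits an object when the object
# matches every descriptor in the rule's premise; matched rules vote for
# their decisions and ties are resolved randomly.

def classify(obj, rules, seed=0):
    rng = random.Random(seed)
    votes = {}
    for premise, dec in rules:
        if all(obj.get(a) == v for a, v in premise):
            votes[dec] = votes.get(dec, 0) + 1
    if not votes:
        return None  # no rule fits; a fallback strategy would be needed
    top = max(votes.values())
    winners = [d for d, n in votes.items() if n == top]
    return rng.choice(winners)  # random tie-breaking

rules = [
    ((("a5", 9.0), ("a13", 80.0)), 1.0),
    ((("a2", 34.08), ("a6", 4.0)), 0.0),
]
label = classify({"a5": 9.0, "a13": 80.0, "a2": 1.0}, rules)
```
      </preformat>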
    </sec>
    <sec id="sec-4">
      <title>Results</title>
      <p>We have carried out experiments on exemplary real data from the UCI Repository. The description of the datasets used is presented in Table 1, while the results of granulation and classification are shown in Table 2 and Table 3, respectively.</p>
      <p>We have used our own implementation of the exhaustive algorithm; the basic method for results evaluation is the 5-fold cross-validation (CV-5) technique.</p>
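      <p>The CV-5 protocol can be sketched as follows; this stdlib-only split generator is an illustration of the evaluation scheme, not the code used in the experiments.</p>
      <preformat>
```python
import random

# Sketch of 5-fold cross-validation: the object ids are shuffled and split
# into five folds; each fold serves once as the test set while the remaining
# four folds form the training set.

def cv5_splits(n_objects, seed=0):
    rng = random.Random(seed)
    ids = list(range(n_objects))
    rng.shuffle(ids)
    folds = [ids[i::5] for i in range(5)]
    for i in range(5):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        yield train, test

splits = list(cv5_splits(10))  # five (train, test) pairs
```
      </preformat>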
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>In this work we have considered the technique of granulating an exhaustive set of rules. We have compared the set of decision rules after their granulation with the rules computed from granulated decision systems. It was shown that it is better to granulate first and then compute rules than to compute rules and then granulate them; in the latter case we lose information. It is difficult to merge rules after their granulation. The granulation of exhaustive decision rules seems to be ineffective, because the rules are designed in the MDL (minimal description length) model and are not redundant. The granulation process works well when there are many indiscernible values in the granulated entity.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The research has been supported by grant 23.610.007-300 from the Ministry of Science and Higher Education of the Republic of Poland.</p>
      <p>26. University of California, Irvine Machine Learning Repository: https://archive.ics.uci.edu/ml/index.php</p>
      <p>27. Zadeh, L. A.: Fuzzy sets and information granularity. In: Gupta, M., Ragade, R., Yager, R. R. (eds.): Advances in Fuzzy Set Theory and Applications. North-Holland, Amsterdam, pp 3-18 (1979)</p>
      <p>28. Ropiak, K., Artiemjew, P.: A Study in Granular Computing: Homogenous Granulation. In: Dregvaite, G., Damasevicius, R. (eds.): Information and Software Technologies. ICIST 2018. Communications in Computer and Information Science, Springer (2018)</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Classifiers from Granulated Data Sets: Concept Dependent and Layered Granulation</article-title>
          .
          <source>In Proceedings RSKD'07. The Workshops at ECML/PKDD'07</source>
          , Warsaw Univ. Press, Warsaw,
          <year>2007</year>
          , pp
          <fpage>1</fpage>-<lpage>9</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Natural versus granular computing: Classifiers from granular structures</article-title>
          .
          <source>In Proceedings of 6th International Conference on Rough Sets and Current Trends in Computing RSCTC'08</source>
          , Akron OH, USA, (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A Review of the Knowledge Granulation Methods: Discrete vs. Continuous Algorithms</article-title>
          . In Skowron, A., Suraj, Z. (eds.):
          <source>Rough Sets and Intelligent Systems</source>
          .
          <source>ISRL 43</source>
          , Springer-Verlag, Berlin,
          <year>2013</year>
          , pp
          <fpage>41</fpage>-<lpage>59</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Pawlak</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Rough sets</article-title>
          .
          <source>International Journal of Computer and Information Sciences 11</source>
          , pp
          <fpage>341</fpage>-<lpage>356</lpage>
          (
          <year>1982</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Polap</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wozniak</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wei</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Damasevicius</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>: Multi-threaded learning control mechanism for neural networks</article-title>
          .
          <source>Future Generation Computer Systems</source>
          ,
          <year>Elsevier 2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Rough Sets</article-title>
          .
          <source>Mathematical Foundations</source>
          . Physica Verlag, Heidelberg (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A rough set paradigm for unifying rough set theory and fuzzy set theory</article-title>
          .
          <source>Fundamenta Informaticae 54</source>
          , pp
          <fpage>67</fpage>-<lpage>88</lpage>
          ; and: In Proceedings RSFDGrC03, Chongqing, China,
          <source>2003. Lecture Notes in Artificial Intelligence vol. 2639</source>
          , Springer Verlag, Berlin, pp
          <fpage>70</fpage>-<lpage>78</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Toward rough set foundations. Mereological approach</article-title>
          .
          <source>In Proceedings RSCTC04, Uppsala, Sweden. Lecture Notes in Artificial Intelligence</source>
          vol.
          <volume>3066</volume>
          , Springer Verlag, Berlin, pp
          <fpage>8</fpage>-<lpage>25</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Granulation of knowledge in decision systems: The approach based on rough inclusions</article-title>
          .
          <source>The method and its applications. In Proceedings RSEISP'07, Lecture Notes in Artificial Intelligence</source>
          vol.
          <volume>4585</volume>
          . Springer Verlag, Berlin, pp
          <fpage>69</fpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Formal granular calculi based on rough inclusions</article-title>
          .
          <source>In Proceedings of IEEE 2005 Conference on Granular Computing GrC05</source>
          , Beijing, China. IEEE Press, pp
          <fpage>57</fpage>-<lpage>62</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A model of granular computing with applications</article-title>
          .
          <source>In Proceedings of IEEE 2006 Conference on Granular Computing GrC06</source>
          , Atlanta, USA. IEEE Press, pp
          <fpage>9</fpage>-<lpage>16</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>The paradigm of granular rough computing</article-title>
          .
          <source>In Proceedings ICCI'07</source>
          , Lake Tahoe NV. IEEE Computer Society, Los Alamitos CA, pp
          <fpage>145</fpage>-<lpage>163</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A Unified Approach to Granulation of Knowledge and Granular Computing Based on Rough Mereology: A Survey, in: Handbook of Granular Computing, Witold Pedrycz</article-title>
          , Andrzej Skowron, Vladik Kreinovich (Eds.), John Wiley &amp; Sons, New York,
          <fpage>375</fpage>
          -
          <lpage>401</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Granulation of Knowledge: Similarity Based Approach in Information and Decision Systems</article-title>
          . In Meyers, R. A.(ed.):
          <source>Encyclopedia of Complexity and System Sciences</source>
          . Springer Verlag, Berlin, article
          <volume>00788</volume>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Approximate Reasoning by Parts</article-title>
          . An Introduction to Rough Mereology. Springer Verlag, Berlin, (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On granular rough computing with missing values</article-title>
          .
          <source>In Proceedings RSEISP'07, Lecture Notes in Artificial Intelligence</source>
          vol.
          <volume>4585</volume>
          . Springer Verlag, Berlin,
          <year>2007</year>
          , pp
          <fpage>271</fpage>-<lpage>279</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On granular rough computing: Factoring classifiers through granular structures</article-title>
          .
          <source>In Proceedings RSEISP 2007, Lecture Notes in Artificial Intelligence</source>
          vol.
          <volume>4585</volume>
          . Springer Verlag, Berlin, pp
          <fpage>280</fpage>-<lpage>290</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Towards Granular Computing: Classifiers Induced from Granular Structures</article-title>
          .
          <source>In Proceedings RSKD'07. The Workshops at ECML/PKDD'07</source>
          , Warsaw Univ. Press, Warsaw, pp
          <fpage>43</fpage>-<lpage>53</lpage>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Classifiers based on granular structures from rough inclusions</article-title>
          .
          <source>In Proceedings of 12th Int. Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU'08</source>
          , Torremolinos (Malaga), Spain, pp
          <fpage>1786</fpage>-<lpage>1794</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Rough sets in data analysis: foundations and applications</article-title>
          . In Smolinski, T. G.,
          <string-name>
            <surname>Milanova</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hassanien</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          -E. (eds.):
          <article-title>Applications of Computational Intelligence in Biology: Current Trends and Open Problems</article-title>
          , SCI, vol.
          <volume>122</volume>
          . Springer Verlag, Berlin,
          <year>2008</year>
          , pp
          <fpage>33</fpage>-<lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Rough mereology in classification of data: Voting by means of residual rough inclusions</article-title>
          .
          <source>In Proceedings of 6th International Conference on Rough Sets and Current Trends in Computing RSCTC'08</source>
          , Akron OH,
          <source>USA. Lecture Notes in Artificial Intelligence</source>
          vol.
          <volume>5306</volume>
          , Berlin, pp
          <fpage>113</fpage>-<lpage>120</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>A study in granular computing: On classifiers induced from granular reflections of data</article-title>
          .
          <source>Transactions on Rough Sets IX. Lecture Notes in Computer Science</source>
          vol.
          <volume>5390</volume>
          . Springer, Berlin, pp
          <fpage>230</fpage>-<lpage>263</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>On classifying mappings induced by granular structures</article-title>
          .
          <source>Transactions on Rough Sets IX. Lecture Notes in Computer Science</source>
          vol.
          <volume>5390</volume>
          . Springer, Berlin, pp
          <fpage>264</fpage>-<lpage>286</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Artiemjew</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Granular Computing in Decision Approximation - An Application of Rough Mereology</article-title>
          ,
          <source>in: Intelligent Systems Reference Library 77</source>
          , Springer,
          <source>ISBN 978-3-319-12879-5</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>422</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Ohno-Machado</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Cross-validation and Bootstrap Ensembles, Bagging, Boosting</article-title>
          .
          <source>Harvard-MIT Division of Health Sciences and Technology, HST.951J: Medical Decision Support</source>
          , http://ocw.mit.edu/courses/health-sciences-and-technology/hst-951j-medicaldecision-support-fall-2005/lecture-notes/hst951 6.pdf, Fall (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>