<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Betweenness, Lukasiewicz Rough Inclusions, Euclidean Representations in Information Systems, Hyper-granules, Conflict Resolution</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Lech T. Polkowski</string-name>
          <email>polkow@pjwstk.edu.pl</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics and Computer Science University of Warmia and Mazury Olsztyn, Poland and Polish-Japanese Institute of IT.</institution>
          <addr-line>Warszawa</addr-line>
          ,
          <country country="PL">Poland</country>
        </aff>
      </contrib-group>
      <fpage>97</fpage>
      <lpage>110</lpage>
      <abstract>
        <p>The purpose of this note is to augment information systems with a relation of betweenness among things in the universe of the system and to derive from it a geometric representation of granules of things in Euclidean spaces. Next, the notion of betweenness renders a service in introducing a new notion of a hyper–granule. An application to conflict resolution, by defining coalitions of agents as granules or hyper–granules and their mixed strategies as elements of convex hulls spanned on the things defining the granule, is proposed. Finally, hyper–granules offer a new classifying algorithm which exploits neighborhoods of things and is, in a sense, a similarity–improved variant of the nearest neighbor classifier.</p>
      </abstract>
      <kwd-group>
        <kwd>rough inclusions</kwd>
        <kwd>betweenness</kwd>
        <kwd>Euclidean representations of granules</kwd>
        <kwd>hyper-granules</kwd>
        <kwd>coalitions</kwd>
        <kwd>conflict resolutions</kwd>
        <kwd>classifier synthesis</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Preliminaries</title>
      <p>We would like to recall here basic facts about the notions mentioned in the Abstract, to make
this note self–contained. We begin with the Lukasiewicz rough inclusion.</p>
      <p>
        1.1 The Lukasiewicz Rough Inclusion
Given a set of things U , a rough inclusion is a ternary relation μ ⊆ U × U × [0, 1] which
renders the notion of ‘to be a part to a degree’. It is rooted in mereology, see, e.g., [<xref ref-type="bibr" rid="ref3">3</xref>],
whose basic notion is that of a part, a transitive and irreflexive relation π on
the product U × U , along with its reflexive closure, the ingredient ingr, i.e., ingr =
part ∪ ‘=’. The rough inclusion μ on the set U × U should satisfy the requirements
1. I1. μ(x, y, 1) if and only if ingr(x, y).
2. I2. μ(x, y, 1) and μ(z, x, r) imply μ(z, y, r).
3. I3. μ(x, y, r) and s &lt; r imply μ(x, y, s).
      </p>
    </sec>
    <sec id="sec-2">
      <title>A rough inclusion μ induces a mereological distance dμ by means of the formula</title>
      <p>dμ(x, y) = min{sup{r : μ(x, y, r)}, sup{s : μ(y, x, s)}}.    (1)</p>
    </sec>
    <sec id="sec-3">
      <title>Assuming that suprema in the formula (1) are achieved, we have</title>
      <p>Proposition 1. dμ(x, y) = 1 if and only if x = y.</p>
      <p>
        In the case when things in the universe U are described by means of a finite set A of
attributes, so that each thing x ∈ U is represented by its information set {a(x) : a ∈ A},
we can define the Lukasiewicz rough inclusion μL by taking as its value on a pair x, y
in U the quotient of the cardinality of the indiscernibility set IN D(x, y) = {a ∈ A :
a(x) = a(y)} and the cardinality of the set A:
μL(x, y) = card(IN D(x, y)) / card(A),    (2)
cf. [<xref ref-type="bibr" rid="ref3">3</xref>], Ch. 6.
      </p>
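      <p>As an illustrative sketch (the helper names below are ours, not the paper’s), the Lukasiewicz rough inclusion (2) and the induced distance (1) can be computed for things given by their information sets:</p>
      <preformat>
```python
from fractions import Fraction

def mu_L(x, y):
    # Lukasiewicz rough inclusion value (2): |IND(x, y)| / |A|,
    # for things x, y given as dicts attribute -> value over the same A
    ind = [a for a in x if x[a] == y[a]]
    return Fraction(len(ind), len(x))

def d_mu(x, y):
    # mereological distance (1); mu_L is symmetric, so both suprema coincide
    return min(mu_L(x, y), mu_L(y, x))

a = {'a1': 0, 'a2': 1, 'a3': 1}
b = {'a1': 0, 'a2': 0, 'a3': 1}
print(d_mu(a, a))  # 1, cf. Proposition 1
print(d_mu(a, b))  # 2/3
```
      </preformat>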
      <p>
        1.2 Betweenness
The notion of betweenness plays an essential role in the axiomatization of the elementary
geometry of Euclidean spaces due to Tarski, see Tarski and Givant [<xref ref-type="bibr" rid="ref4">4</xref>]; it is formalized
as a relation B(x, y, z) (‘y is between x and z’); intuitively, B(x, y, z) means that y lies
on the straight line segment with endpoints x, z.
      </p>
      <p>
        Van Benthem [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] proposed an extension of the betweenness relation based on
the relation of nearness N (x, y, z) (‘x is closer to y than z’) which in terms of the
distance dμ would be defined by means of
      </p>
      <p>N (x, y, z) if and only if dμ(x, y) &gt; dμ(z, y).    (3)</p>
      <p>
        The relation N thus defined satisfies all axioms for nearness in Van Benthem
[<xref ref-type="bibr" rid="ref1">1</xref>], i.e.,
1. N1. N (z, u, v) and N (v, u, w) imply N (z, u, w) (transitivity).
2. N2. N (z, u, v) and N (u, v, z) imply N (u, z, v) (triangle inequality).
3. N3. N (z, u, z) is not true for any pair u, z (irreflexivity).
4. N4. z = u or N (z, z, u) (selfishness).
5. N5. N (z, u, v) implies N (z, u, w) or N (w, u, v) (connectedness).
For the proof, cf. [<xref ref-type="bibr" rid="ref3">3</xref>], Ch. 6.
      </p>
    </sec>
    <sec id="sec-5">
      <title>Betweenness in the sense of Van Benthem</title>
      <p>
        Betweenness relation in the sense of Van Benthem TB(z, u, v) (‘u is between z
and v’) introduced in Van Benthem [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is rendered by a formula
      </p>
      <p>TB(z, u, v) ⇔ [for each w ∈ U : (u = w) or N (u, z, w) or N (u, v, w)].    (4)
This means that for each thing w distinct from u, either u is closer to z than w is, or u is
closer to v than w is.</p>
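      <p>A small self–contained check (helper names ours) of the nearness relation (3) and the betweenness TB of (4) over a three–thing universe:</p>
      <preformat>
```python
from fractions import Fraction

def mu_L(x, y):
    # Lukasiewicz rough inclusion value (2)
    return Fraction(sum(1 for a in x if x[a] == y[a]), len(x))

def d(x, y):
    # mereological distance (1); symmetric for mu_L
    return min(mu_L(x, y), mu_L(y, x))

def N(x, y, z):
    # nearness (3): 'x is closer to y than z is' (d = 1 means identity)
    return d(x, y) > d(z, y)

def TB(U, z, u, v):
    # Van Benthem betweenness (4): u is between z and v
    return all(u == w or N(u, z, w) or N(u, v, w) for w in U)

# c agrees with a and b where they agree (here on a1) and splits the rest
a = {'a1': 1, 'a2': 0, 'a3': 0}
b = {'a1': 1, 'a2': 1, 'a3': 1}
c = {'a1': 1, 'a2': 0, 'a3': 1}
U = [a, b, c]
print(TB(U, a, c, b))  # True: c is between a and b
```
      </preformat>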
      <p>2. Granules from Indiscernible Things
We assume that things in the set U are described by means of values of N real–valued
attributes in the set A; from this point of view the set U becomes the set R(U ) ⊆ RN of
vectors in RN . As said above, each thing x ∈ U is represented by the vector [ai(x)], i = 1, . . . , N ,
where &lt; a1, a2, . . . , aN &gt; is a fixed ordering of the attributes.</p>
    </sec>
    <sec id="sec-6">
      <title>We consider a pair a, b of things in U such that a, b have in common attribute values in a set Δ ⊂ A, where Δ is non–empty; formally, this means that IN D(a, b) = Δ. Let</title>
      <p>card(Δ) = δ · N.    (5)</p>
      <p>Example 1. We consider things a, b and a thing c which agrees with a and b on the set Δ
and has 1/2 · (N − δ · N ) attribute values in common with a and 1/2 · (N − δ · N ) attribute
values in common with b; we assume for simplicity that this is possible, otherwise we
should consider ⌊1/2 · (N − δ · N )⌋ and ⌈1/2 · (N − δ · N )⌉, respectively, which would only
make the calculations more cumbersome. Please observe that we do not specify the positions
of values, i.e., we do not specify the particular attributes on which these values are taken,
with the exception of Δ only. Therefore, it makes sense to identify all such things into a
class [c] in which all things have the same values as a and b on attributes in Δ and share
with each of a, b one half of the remaining values.</p>
    </sec>
    <sec id="sec-7">
      <title>Our Thesis is</title>
      <sec id="sec-7-1">
        <title>Proposition 2. c is between a and b.</title>
        <p>Proof. We find the distances dμL(a, c) and dμL(b, c) with respect to the rough inclusion
μL; by definition (1),</p>
        <p>dμL(a, c) = (δ · N + 1/2 · (N − δ · N )) / N = (1 + δ)/2 = dμL(c, b).    (6)</p>
        <p>We now consider an arbitrary thing x which for some quotient α in [0, 1] has α · δ · N
values of attributes in Δ in common with a, and for some quotient β ∈ [0, 1] has β · (N −
δ · N ) values of attributes not in Δ in common with a and at most (1 − β) · (N − δ · N )
such values in common with b. We have</p>
        <p>dμL(x, a) = (α · δ · N + β · (N − δ · N )) / N = β + (α − β) · δ,    (7)</p>
        <p>and</p>
        <p>dμL(x, b) ≤ (α · δ · N + (1 − β) · (N − δ · N )) / N = 1 − β + (α + β − 1) · δ.    (8)</p>
        <p>Let us assume, to the contrary, that (1) dμL(x, a) &gt; dμL(c, a) and (2) dμL(x, b) &gt;
dμL(c, b).</p>
        <p>Condition (1) means, after substitution of the values in (6) and (7), that</p>
        <p>β + (α − β) · δ &gt; (1 + δ)/2,    (9)</p>
        <p>i.e.,</p>
        <p>α · δ + β · (1 − δ) &gt; 1/2 + δ/2.    (10)</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>Similarly, condition (2), after the values in (6) and (8) are substituted into it, yields</title>
      <p>α · δ − β · (1 − δ) &gt; (3/2) · δ − 1/2.    (11)</p>
      <p>Adding inequalities (10) and (11) yields</p>
      <p>2 · α · δ &gt; 2 · δ,    (12)</p>
      <p>and, as δ &gt; 0,</p>
      <p>α &gt; 1,    (13)</p>
      <p>a contradiction. This proves the proposition.</p>
    </sec>
    <sec id="sec-9">
      <title>This example has served as a motivation for further generalizations.</title>
      <p>
        We have already observed that we have not specified the attributes selected and
actually we have discussed classes of equivalence of things, two things x, y being
equivalent if and only if they have had same fractions of attribute values for attributes
in Δ and same fraction of attribute values not in Δ. Hence, for fractions γ of values
in Δ and ε of values in A \ Δ common with a, and, at most 1 − ε values of attributes
in A \ Δ in common with b, we denote with the symbol [γ, ε] the class of things
satisfying those conditions. We regard, hence, the vector [γ, ε] as the representation of
that class in the vector space R2, the representation space. In particular, the thing a is
represented as [
        <xref ref-type="bibr" rid="ref1 ref1">1, 1</xref>
        ], the thing b is represented as [
        <xref ref-type="bibr" rid="ref1">1, 0</xref>
        ], and, the thing c is in the class
[
        <xref ref-type="bibr" rid="ref1">1, 21</xref>
        ]. Thus, in the Euclidean plane, c is the midpoint in the segment with endpoints b
and c, i.e. c is between a and b in the elementary geometry sense.
      </p>
    </sec>
    <sec id="sec-10">
      <title>The proof above does suggest a more general result.</title>
      <p>
        Proposition 3. For α ∈ [0, 1], the class [1, α] is between the class [1, 0] of b and the class [1, 1] of
a in the representation space.
      </p>
      <p>Proof. This proof goes along similar lines as the proof of the previous proposition. Let d
denote the class [1, α] and let x be in the class [γ, ε]. Assuming, to the contrary, that
(1) dμL(x, a) &gt; dμL(d, a) and (2) dμL(x, b) &gt; dμL(d, b), we obtain the inequalities
(14) and (15).</p>
    </sec>
    <sec id="sec-11">
      <title>Sidewise addition of (14) and (15) yields the inequality</title>
      <p>γ · δ + ε · (1 − δ) &gt; δ + α · (1 − δ),    (14)</p>
      <p>γ · δ + (1 − ε) · (1 − δ) &gt; δ + (1 − α) · (1 − δ).    (15)</p>
      <p>2 · γ · δ &gt; 2 · δ,    (16)</p>
      <p>i.e., γ &gt; 1, which is impossible. The proposition is proved.</p>
      <p>
        It turns out that the whole interval from [1, 0] to [1, 1] consists of classes between
[1, 0] and [1, 1]. In the representation space of classes [γ, ε], betweenness in the
mereological sense coincides with betweenness in the geometric sense of the Euclidean
geometry of the plane.
      </p>
      <p>
        2.1 The Case Δ = ∅
In this case δ = 0 and our proofs above are not valid. Given things a, b in
U with IN D(a, b) = ∅, for a choice of γ ∈ [0, 1], we form things which have γ · N
attribute values in common with a and (1 − γ) · N attribute values in common with b.
We represent this class of things, as before, with the vector [γ, 1 − γ] in the Euclidean
plane. In this representation, a is represented as [1, 0] and b is represented as [0, 1], so
[γ, 1 − γ] is a convex combination of [1, 0] and [0, 1].
      </p>
      <p>
        Proposition 4. For each choice of γ ∈ [0, 1], the class of things represented as [γ, 1 − γ]
is between a and b in the sense of the betweenness relation B.
      </p>
      <p>
        Proof. For a point [ε, δ] with ε, δ ∈ [0, 1] and ε + δ ≤ 1, if, e.g., ε &gt; γ, then δ &lt; 1 − γ.
It follows that things and their classes which are between a and b are located in the
representation space in the segment with endpoints [1, 0] for a and [0, 1] for b, i.e.,
between these endpoints in the geometrical sense.
      </p>
    </sec>
    <sec id="sec-12">
      <title>This suggests a generalization. We define a more general betweenness relation</title>
      <p>GB(x, a1, a2, . . . , an)
(‘x is between a1, a2, . . . , an’) if and only if for each thing y ≠ x the thing x is closer
than y to some ai in the mereological sense of (3).</p>
      <p>
        2.2 The General Case
We consider a set V = {a1, a2, . . . , an} of things in U . For a choice of γ1, γ2, . . . ,
γn ∈ [0, 1] with Σi γi = 1, which we summarily denote by the vector γ, we denote as
(V, γ) the class of things which have the fraction γi of attribute values in common with
the thing ai. As above, the following fact is true.
Proposition 5. The class (V, γ) of things represented by the vector γ = [γ1, γ2,
. . . , γn] satisfies the relation GB((V, γ), a1, a2, . . . , an).
      </p>
    </sec>
    <sec id="sec-13">
      <title>The proof of this proposition goes along the lines of the preceding proof.</title>
      <p>Proposition 6. The relation GB(x, a1, a2, . . . , an) holds for a class
x = ({a1, ..., an}, [γ1, γ2, . . . , γn])    (17)
if and only if [γ1, γ2, . . . , γn] belongs to the convex hull of the vectors [1, 0, 0, ..., 0],
[0, 1, 0, 0, ..., 0], ..., [0, 0, ..., 0, 1] representing, in this order, the classes a1, a2, ..., an.</p>
      <p>3. A Geometric Representation of Granulation
In the general case, the process of forming a class x = (V, γ) represented by the vector
γ = [γ1, γ2, ..., γn] can be regarded as the forming of a granule of things gr(V, γ). This
granulation process is different from those previously considered in that it involves a sort of
shuffling map, which takes into x things having specified fractions of attribute values
in common with the corresponding classes ai, but over arbitrary sets of attributes of
cardinality γi · N . This secures a kind of control over values, in contradistinction to our
previous granulation paradigm in which only a fixed part of a given thing’s attribute
values was coming from the granule center, cf. [<xref ref-type="bibr" rid="ref3">3</xref>], Chs. 5–7.</p>
      <p>For a given vector γ = [γ1, γ2, . . . , γn] in the representation space, representing
the granule gr(V, γ), the size of the granule gr(V, γ) is</p>
      <p>∏j≤n C(N − Σk&lt;j γk · N, γj · N ),    (18)</p>
      <p>where C(m, k) denotes the binomial coefficient. The number of granules of the type represented by a vector γ = [γ1, γ2, . . . , γn] is the
number of sequences k1, k2, . . . , kn of natural numbers such that Σi ki = N , i.e.,
it equals the number of integer–valued vectors on the hyperplane Σi xi = N , given
recurrently by the function φ(N, n):</p>
      <p>φ(N, 1) = 1,    (19)</p>
      <p>φ(N, k + 1) = Σj=0..N φ(N − j, k).    (20)</p>
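      <p>The recurrence (19)–(20) is immediate to implement; a small sketch (the function name is ours) also checks φ(N, n) against the closed form C(N + n − 1, n − 1), the number of ways to write N as an ordered sum of n non–negative integers:</p>
      <preformat>
```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def phi(N, n):
    # number of non-negative integer vectors (x1, ..., xn) with sum N,
    # via the recurrence (19)-(20)
    if n == 1:
        return 1
    return sum(phi(N - j, n - 1) for j in range(N + 1))

print(phi(5, 3), comb(5 + 3 - 1, 3 - 1))  # 21 21
```
      </preformat>
      <p>For fixed n the closed form is a polynomial of degree n − 1 in N, which agrees with the Θ(N^(n−1)) order stated below.</p>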
    </sec>
    <sec id="sec-14">
      <title>It follows that φ(N, n) is of order Θ(N^(n−1)).</title>
    </sec>
    <sec id="sec-15">
      <title>Let us observe that those estimates concern all objects generated by the process</title>
      <p>described above; in reality, only a small fraction of those objects will exist in the set U .</p>
    </sec>
    <sec id="sec-16">
      <title>Hence, we define a U–granule gU (γ) as g(γ) ∩ U .</title>
      <p>3.1 Granular Classifiers
Assume that a decision partition is imposed on the things in the set U into decision
classes D1, D2, . . . , Dm. For a U–granule g = gU (γ), we denote with P r[g, Di]
the probability that a randomly chosen thing in g is assigned to the class Di. Then
the Bayesian decision on g is the class D(g) = Di∗ such that P r[g, Di∗ ] =
max{P r[g, Di] : i = 1, 2, . . . , m}.</p>
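      <p>A minimal sketch of the Bayesian decision D(g) on a U–granule (names ours); the probabilities P r[g, Di] are taken as class frequencies inside the granule:</p>
      <preformat>
```python
from collections import Counter

def bayesian_decision(granule, decision):
    # D(g) = the class Di* maximizing Pr[g, Di], i.e. the most
    # frequent decision class among the things of the granule
    counts = Counter(decision[x] for x in granule)
    return counts.most_common(1)[0][0]

g = ['x1', 'x2', 'x3', 'x4']
d = {'x1': 'D1', 'x2': 'D2', 'x3': 'D2', 'x4': 'D2'}
print(bayesian_decision(g, d))  # D2
```
      </preformat>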
    </sec>
    <sec id="sec-17">
      <title>The mapping g → D(g) from U –granules into their Bayesian decisions is the Bayesian granular classifier. It is deterministic as the covering into granules is finer than the partition into decision classes.</title>
      <p>4</p>
      <p>
        An Application Proposal: Conflict Resolution
We propose to apply the afore described results to the problem of conflict resolution.
In a conflict, we have a finite number of agents declaring their standpoints on a
number of issues; those standpoints are conflicting, i.e., on each issue there are agents
having distinct standpoints. A resolution of a conflict means some process leading to
a rationally chosen set of non–conflicting standpoints for all agents. Examples can be
saddle points in two–person zero–sum games, either for pure or mixed strategies, or,
the Nash equilibrium points in continuous convex–concave games. In each of these
cases, agents reach a set of strategies which is satisfying for each of them.
Such a rationale is difficult to be obtained in less formalized conflicts, and, our
example deals with such a conflict. We base our approach on the example of a
conflict in Pawlak [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], for which that author gave an analysis in terms of the rough
set–theoretical approach.
      </p>
      <p>
        Example 2. The conflict described in [<xref ref-type="bibr" rid="ref2">2</xref>] involves six agents, A1, A2, ..., A6, and
five issues, I1, I2, ..., I5, with the standpoints 0 (disagreement), 1 (agreement) and N (the
neutral standpoint, equivalent to ‘do not care’). Hence, when standpoint N is confronted with 0 or 1,
N means 0 or 1, respectively.
      </p>
    </sec>
    <sec id="sec-18">
      <title>This case is visualized in the table of Fig. 1 below.</title>
      <p>agent I1 I2 I3 I4 I5
A1     0  1  1  1  1
A2     1  N  0  0  0
A3     1  0  0  0  N
A4     N  0  0  N  0
A5     1  0  0  0  0
A6     N  1  0  N  1</p>
      <p>In this case, a set of agents V = {Ai1 , Ai2 , . . . , Aik } forming a granule is called a
coalition, and the standpoints of its members on the issues are their pure strategies. Given coefficients
γ1, ..., γk summing up to 1, sequences of issue standpoints in the granule (V, [γ1, ..., γk]) are mixed
strategies of the agents. Given a partition of the set of agents into coalitions C1, C2, ..., Cm,
we call a conflict resolution a set {i1, i2, ..., im} of mixed strategies issued from the granules
generated by C1, C2, ..., Cm which are consistent, i.e., conflict–less.</p>
    </sec>
    <sec id="sec-19">
      <title>We use the Lukasiewicz rough inclusion as the measure of similarity of agents, and, we include in one coalition agents for which pairwise similarity measures are greater than their similarity degrees to other agents.</title>
    </sec>
    <sec id="sec-20">
      <title>From Fig. 1, it follows that agents A1 and A6 are similar to degree of 0.8, agents A2, A3, A4, A5 are similar to each other to degree of 1.0, and, agent A1 is similar to A2, A3, A4, A5 to degree not greater than 0.4. Only A6 is similar to A2, A3, A4 to degree 0.8, and, to A5 to degree 0.6.</title>
      <p>We decide to consider coalitions V1 = {A1, A6} and V2 = {A2, A3, A4, A5}.
Forming all possible granules over V1 and V2 yields the possible mixed strategies of both
coalitions. Due to their number, we list in Figs. 2, 3 a selection of six of those strategies
for each of the two coalitions.
We find exemplary boldfaced mixed strategies N10N1 for the coalition V1 and
NN0NN for the coalition V2 which are consistent with each other: each of them can
provide a conflict resolution; it is true that this requires a ‘gentleman’s agreement’ to
accept the results of this procedure; for instance, each of these solutions would require
that A1 gives up its standpoint on issue I1, which in real practice can be impossible
due, e.g., to political and religious reasons. The specific bargained–for standpoints can
be: 01001, 11001, 01011, 11011.
The approach presented above begins with a group G of things and ends with the
notion of a collection of things between G. Among those things are virtual ones, not
present in the decision/information system. Therefore, we now propose the analysis
from the viewpoint of things in the information system.</p>
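      <p>The pairwise similarity degrees quoted above are easy to recompute; the sketch below (names ours, and the row order of Fig. 1 taken as printed) encodes the standpoints, with N treated as matching any standpoint:</p>
      <preformat>
```python
from fractions import Fraction

# standpoints on issues I1-I5 from Fig. 1; 'N' ('do not care') matches anything
agents = {
    'A1': '01111', 'A2': '1N000', 'A3': '1000N',
    'A4': 'N00N0', 'A5': '10000', 'A6': 'N10N1',
}

def sim(p, q):
    # Lukasiewicz similarity with the N convention
    agree = sum(1 for s, t in zip(p, q) if s == t or 'N' in (s, t))
    return Fraction(agree, len(p))

print(sim(agents['A1'], agents['A6']))  # 4/5
print(sim(agents['A2'], agents['A5']))  # 1
print(sim(agents['A6'], agents['A2']))  # 4/5
```
      </preformat>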
    </sec>
    <sec id="sec-21">
      <title>We consider a maximal set of things X with the property</title>
      <p>∀x ∈ X .∃Y ⊆ X \ {x}.Btw(x, Y ).
(21)</p>
      <sec id="sec-21-1">
        <title>We call X a hyper–granule.</title>
      </sec>
    </sec>
    <sec id="sec-22">
      <title>Lemma 1. For each x ∈ X , it is true that Btw(x, X \ {x}).</title>
      <p>Lemma 2. For each y ∉ X , Btw(y, Y ) holds for no Y ⊆ X . Moreover,
there exists an attribute a such that a(y) ≠ a(x) for each x ∈ X . Contrariwise,
for each x ∈ X and each attribute a, it is true that a(x) ∈ {a(y) : y ∈ X \ {x}}.</p>
    </sec>
    <sec id="sec-23">
      <title>The hyper–granule X is attribute–self–contained in the sense of Lemma 2.</title>
      <p>For the hyper–granule X , we consider the complementary set of objects U \ X ;
let X ′ be a hyper–granule of objects in U \ X . Iterating this procedure, we define in the
universe set U a set of hyper–granules {X (i) : i ∈ I}, where X (j+1) is a hyper–granule
in the set U \ ∪i≤j X (i). The remnant of U consists of outliers unable to enter any
hyper–granule.</p>
      <p>We denote with the symbol H(x) the hyper–granule containing the thing x. For
a thing x, we consider sets of things in H(x) \ {x}, denoted with the generic
symbol N (x), with the property that Btw(x, N (x)) and all coordinates of the vector
representing x with respect to N (x) are positive; we call any such set a
neighborhood of x; in the example in Fig. 4, for instance, N (4) = {8, 10}.</p>
      <p>A neighborhood N (x) of x is irreducible if it is of minimal possible cardinality;
such is N (4) pointed to above.</p>
      <p>6. Applications: Hyper–Granules as Coalitions in Conflicts and a
Decision Prediction Modal Logic</p>
    </sec>
    <sec id="sec-24">
      <title>We will discuss first the problem of conflict resolution.</title>
      <p>6.1 Hyper–Granules as Coalitions in Conflicts
We return to Fig. 1, showing the standpoints of agents A1–A6 on issues I1–I5. As before,
we regard the standpoint N as ‘don’t care’; a fortiori, N can be 0 or 1. We observe that
A1 cannot be considered as a candidate to the hyper–granule of A2–A6, as issue I1
on A1 takes the value 0, not taken by any of A2–A6.</p>
      <p>On the other hand, due to our convention about N , agents A2–A6 make a hyper–
granule X . This hyper–granule can produce a between element N N 0N N representing
sixteen specific bargaining propositions, from 00000 to 11011. The agent A1 remains
an outlier due to the value 0 on I1, which cannot be supplied by any other agent.
The bargaining between X and A1 can focus on I3, on which A1 takes the value
1 and all other agents adopt the standpoint 0.</p>
    </sec>
    <sec id="sec-25">
      <title>6.2 A Decision Prediction Logic and a Decision Assignment Algorithm</title>
      <p>We will regard the universe U of the information system as the training set on which
the decision is given, and we consider an additional test set on which the decision is to be
learned on the basis of its values on U .</p>
    </sec>
    <sec id="sec-26">
      <title>Given a test thing x about which we assume that there exist neighborhoods of it in the set U , and for a formula φ, we declare that x |= φ ⇔ y |= φ for each irreducible N (x) and each y ∈ N (x). (22)</title>
    </sec>
    <sec id="sec-27">
      <title>Modal operators L of necessity and M of possibility are introduced as follows: x |= Lφ ⇔ y |= φ for each N (x) and each y ∈ N (x). (23)</title>
    </sec>
    <sec id="sec-28">
      <title>It follows that the following hold for the operators L and M.</title>
      <p>x |= M φ ⇔ y |= φ for some N (x) and some y ∈ N (x),    (24)</p>
      <p>L = ¬M ¬,    (25)</p>
      <p>x |= L(φ ⇒ ψ) ⇒ (x |= Lφ ⇒ x |= Lψ),    (26)</p>
      <p>x |= Lφ ⇒ x |= φ.    (27)</p>
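      <p>The operators of necessity and possibility over neighborhoods can be sketched directly (helper names ours); the neighborhoods of a thing are supplied as a function:</p>
      <preformat>
```python
def L(phi, nbhds_of):
    # necessity (23): phi holds at each member of each neighborhood of x
    return lambda x: all(phi(y) for nb in nbhds_of(x) for y in nb)

def M(phi, nbhds_of):
    # possibility (24): phi holds at some member of some neighborhood of x
    return lambda x: any(phi(y) for nb in nbhds_of(x) for y in nb)

nbhds = lambda x: [[1, 2], [3]]  # toy neighborhoods of any thing
pos = lambda y: y > 0
print(L(pos, nbhds)('x'))  # True
# duality (25): L phi holds exactly when M (not phi) fails
print(L(pos, nbhds)('x') == (not M(lambda y: not pos(y), nbhds)('x')))  # True
```
      </preformat>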
    </sec>
    <sec id="sec-29">
      <title>Our decision relation formula φ is vd ∈ A where A ⊆ Vd, i.e., A is a subset of the</title>
      <p>set of decision values; the formula x |= (vd ∈ A) reads that the decision value proposed
for x is in the set A of decision values. Necessitation means stressing this hypothesis
by confirming it on all neighborhoods of x, and possibility indicates possible sets of
values of decision for x.</p>
      <p>7. Appendix: Computational Aspects of Hyper–Granules
We consider an information system S = (U, A, V ), where U is a set of things, A is a
set of attributes, and V is a set of attribute values. We will need the notion of a dual
information matrix S∗ defined as the triple (A, V, U ), where for each pair (a, v) the
entry in the cell S∗(a, v) is
{x ∈ U : a(x) = v}.    (28)</p>
      <p>Computing hyper–granules</p>
    </sec>
    <sec id="sec-30">
      <title>The Algorithm proceeds as follows.</title>
      <p>HYPER–GRANULE (U,A,V)
1. Form the dual information matrix S∗;</p>
    </sec>
    <sec id="sec-31">
      <title>2. For each x in U do</title>
      <p>3. if there is a cell (a, v) such that S∗(a, v) = {x}</p>
    </sec>
    <sec id="sec-32">
      <title>4. then remove x from all cells.</title>
    </sec>
    <sec id="sec-33">
      <title>6. Repeat steps 2-4 until</title>
    </sec>
    <sec id="sec-34">
      <title>7. all cells are either empty or each contains at least two things; 8. Return X = the set of all things that occur in at least one non–empty cell. Repeating the algorithm with the dual information matrix (A, V, U \ X ) we may eventually obtain further hyper–granules.</title>
      <p>The complexity of the algorithm is Θ(|A| · |V | · |U |), as we take into account
neither the cost of inserting a symbol into a cell nor that of deleting it from a cell.
We consider the information system TEST in Fig. 4. The dual information matrix
TEST∗ is shown in Fig. 5. We have to remove from all cells the things 2, 5, 9. As each
remaining cell is either empty or some at least two–element set, the hyper–granule is
X = {1, 3, 4, 6, 7, 8, 10}. For the remaining subsystem TEST1 = (A, V, {2, 5, 9}), we
obtain the dual information matrix TEST1∗ shown in Fig. 6. After steps 2–4 of the
algorithm, all cells are empty, so there is no second hyper–granule: things 2, 5, 9 are outliers.</p>
      <p>We would like to observe in addition that it is easy to read off from the dual
information matrix the coordinates of a thing in the representation space; e.g., the thing
4 can be represented as the vector [0, 0, 0, 0, 0, 0, 0, 1/3, 0, 2/3] in the simplex spanned
on unit vectors representing, respectively, things 1, 2, ..., 10 in the vector space R10.
Hence, N (4) = {8, 10} is an irreducible neighborhood of the thing 4; another
irreducible neighborhood of 4 is {1, 10} with coordinates [1/2, 0, ..., 0, 1/2].</p>
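      <p>The HYPER–GRANULE procedure above can be sketched as follows (a minimal implementation under our own naming; it iterates the removal of things that are sole carriers of some cell of the dual information matrix):</p>
      <preformat>
```python
def hyper_granule(U, A, value):
    # repeatedly drop each thing that is the sole carrier of some
    # (attribute, value) cell of the dual information matrix
    X = set(U)
    while True:
        cells = {}
        for x in X:
            for a in A:
                cells.setdefault((a, value(x, a)), set()).add(x)
        singles = {next(iter(s)) for s in cells.values() if len(s) == 1}
        if not singles:
            return X
        X -= singles

# toy system: thing 3 alone carries the value ('a', 1), so it is an outlier
things = {1: {'a': 0, 'b': 1}, 2: {'a': 0, 'b': 1}, 3: {'a': 1, 'b': 1}}
print(hyper_granule(things, ['a', 'b'], lambda x, a: things[x][a]))  # {1, 2}
```
      </preformat>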
      <p>7.1 A Decision Assigning Algorithm
For a new test thing y, we consider in the universe U (the training set) the hyper–granule
H(y) ⊆ U ∪ {y} along with irreducible neighborhoods N1(y), . . . , Nk(y) (if there are
any). Let D(y) ⊆ Vd be the least set of decision values with the property that</p>
      <p>if x ∈ ∪i Ni(y), then d(x) ∈ D(y).    (29)</p>
      <p>By (22), the hypothetical d(y) belongs in D(y). Now, given j ≤ k, the neighborhood
Nj (y) votes for a decision value dj (y) in the following manner. For z ∈ Nj (y), we
denote with qj (z) the coordinate with which z enters the vector representing y with
respect to Nj (y); then dj (y) is the decision value nearest to the weighted sum</p>
      <p>Σz∈Nj(y) qj (z) · d(z).    (30)</p>
      <p>Given a parameter p ≤ k, we select p neighborhoods from among N1(y), N2(y), . . . , Nk(y)
with the greatest values of qj = maxz qj (z), and let</p>
      <p>d(y) = argminv∈D(y) ||v − Σj≤p (qj / Σj≤p qj ) · dj (y)||,    (31)</p>
      <p>where || − || is a metric chosen for the Euclidean space containing Vd.</p>
      <p>This approach may be regarded as a two–step variant of the nearest neighbor classifier:
neighborhoods which are irreducible are at the same time closest to the thing; an
important factor not present in usual nearest neighbor classifiers is that the neighborhoods
also guarantee that attribute values of the thing come from neighborhood members,
which doubly stresses the similarity between the thing and the neighborhood members.</p>
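      <p>The weighted vote (31) can be sketched as follows (names ours); for decision values on the real line the metric is the absolute difference, and the data below reproduce the worked example for thing 4 in the Appendix:</p>
      <preformat>
```python
from fractions import Fraction

def weighted_decision(nbhd_votes, candidates):
    # formula (31): the candidate decision value nearest to the
    # q-weighted average of the neighborhood votes
    total = sum(q for q, _ in nbhd_votes)
    avg = sum(q * d for q, d in nbhd_votes) / total
    return min(candidates, key=lambda v: abs(v - avg))

# (q_j, d_j(y)) pairs from the worked example for thing 4 in TEST
votes = [(Fraction(2, 3), 1), (Fraction(5, 6), 1),
         (Fraction(4, 6), 0), (Fraction(4, 6), 1)]
print(weighted_decision(votes, [0, 1]))  # 1 (the weighted average is 13/17)
```
      </preformat>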
    </sec>
    <sec id="sec-35">
      <title>To give a procedure for computing neighborhoods of things, we define some useful notions.</title>
      <p>I(x, y) = {a ∈ A : a(x) = a(y)},
f (x, y) = card(I(x, y)) / card(A).
For X ⊆ A and x ∈ U , we let V (x, X) = the sequence a(x) : a ∈ X in the assigned
order of attributes; the symbol 0n denotes the sequence of n zeros.</p>
      <p>Procedure Irreducible Neighborhood(x)</p>
    </sec>
    <sec id="sec-36">
      <title>Input: the thing x</title>
    </sec>
    <sec id="sec-37">
      <title>Output: the list of irreducible neighborhoods of x</title>
      <p>Variable sets: z, Az(x, y), nz(x, y), N (x, y), vz(x, y).
Initialization: N (x, y) ← ∅.
1. Order the coefficients f (x, y), y ∈ U , in descending order.
2. In descending order of f (x, y), for each y do steps 3–7:
3. z ← {y}, A{y}(x, y) ← I(x, y), n{y}(x, y) ← {y}, v{y}(x, y) ← [V (x, I(x, y)), 0|A|−|I(x,y)|].
4. While Az(x, y) ≠ A do steps 5–6:
5. in descending order of f (x, z′), for each z′ ∉ z such that (A \ Az(x, y)) ∩ I(x, z′) ≠ ∅ do
6. nz∪{z′}(x, y) ← nz(x, y) ∪ {z′},
vz∪{z′}(x, y) ← vz(x, y) + [0|vz(x,y)|, V (x, (A \ Az(x, y)) ∩ I(x, z′)), 0|A|−|vz(x,y)|−|V (x,(A\Az(x,y))∩I(x,z′))|],
p(z′) ← |V (x, (A \ Az(x, y)) ∩ I(x, z′))|,
Az∪{z′}(x, y) ← Az(x, y) ∪ ((A \ Az(x, y)) ∩ I(x, z′)),
z ← z ∪ {z′}.
7. If Az(x, y) = A, then N (x, y) ← N (x, y) ∪ {nz(x, y)}.
8. From ∪y N (x, y) output the list of sets of minimal cardinality = the list of irreducible neighborhoods of x.
9. With respect to the irreducible neighborhood nz(x, y), the coordinate of y in the representation vector of x is |I(x, y)| / (|I(x, y)| + Σz′∈nz\{y} p(z′)), and the coordinate of z′ is p(z′) / (|I(x, y)| + Σz′∈nz\{y} p(z′)).</p>
    </sec>
    <sec id="sec-38">
      <title>An example of decision assignment in TEST</title>
      <p>As an example, let us assign decision values to things 1, 2, . . . , 10 in TEST, in
Fig. 7.1. For simplicity, the thing 4 is regarded as a test thing. The set D(4) is {0, 1}.
Irreducible neighborhoods of 4 are of cardinality 2, and they are: N1(4) = {1, 10} with
coordinates [1/6, 5/6]; N2(4) = {1, 10} with coordinates [4/6, 2/6]; N3(4) = {7, 8} with
coordinates [2/6, 4/6]; N4(4) = {8, 10} with coordinates [1/6, 5/6].</p>
      <p>Now, N1(4) votes for 1/3 · 1 + 2/3 · 1 = 1 = d1(4);</p>
      <p>N2(4) votes for 4/6 · 0 + 2/6 · 1 → 0 = d2(4);
N3(4) votes for 2/6 · 1 + 4/6 · 1 = 1 = d3(4);</p>
      <p>N4(4) votes for 1/3 · 1 + 2/3 · 1 = 1 = d4(4).</p>
      <p>The final decision value is the nearest of 0, 1 to the value
(2/3 · 1 + 5/6 · 1 + 4/6 · 0 + 4/6 · 1) / (2/3 + 5/6 + 4/6 + 4/6) = 13/17,
which gives d(4) = 1.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>Van</given-names>
            <surname>Benthem</surname>
          </string-name>
          ,
          <string-name>
            <surname>J.</surname>
          </string-name>
          :
          <source>The Logic of Time. D. Reidel</source>
          , Dordrecht,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Pawlak</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>An inquiry into anatomy of conflicts</article-title>
          .
          <source>Journal of Information Sciences</source>
          <volume>109</volume>
          (
          <year>1998</year>
          ), pp
          <fpage>65</fpage>
          -
          <lpage>78</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Polkowski</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Approximate Reasoning by Parts</article-title>
          . An Introduction to Rough Mereology. Springer Verlag, Berlin, ISRL vol.
          <volume>20</volume>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Tarski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Givant</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Tarski's system of geometry</article-title>
          .
          <source>Bull. Symbolic Logic</source>
          <volume>5</volume>
          (
          <issue>2</issue>
          ),
          <year>June 1999</year>
          , pp
          <fpage>175</fpage>
          -
          <lpage>214</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>