<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Formal Concept Analysis Techniques Can Help in Intelligent Control, Deep Learning, etc.?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vladik Kreinovich</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Texas at El Paso</institution>
          ,
          <addr-line>500 W. University, El Paso, TX 79968</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>9</fpage>
      <lpage>18</lpage>
      <abstract>
        <p>In this paper, we show that formal concept analysis is a particular case of a more general problem that includes deriving rules for intelligent control, finding appropriate properties for deep learning algorithms, etc. Because of this, we believe that formal concept analysis techniques can be (and need to be) extended to these application areas as well. To show that such an extension is possible, we explain how these techniques can be applied to intelligent control.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Formulation of the Problem</title>
      <p>This correlation also enables us to effectively interpolate and extrapolate,
i.e., to adequately reconstruct missing information. For example, based on many
readings of temperature, wind speed, and other meteorological characteristics
at several locations and heights, we can reasonably accurately reconstruct the
values of these characteristics at other locations and heights.</p>
      <p>Functions of one, two, etc., inputs. In many cases, we are interested in
characteristics q that depend only on one (possibly multi-dimensional) input x:
q = f(x). For example:
– we may be interested in the temperature q(x) at different locations and
different moments of time (i.e., at different points x in space-time),
– we may be interested in the income q(x) of different people x at different
moments of time, etc.</p>
      <p>However, in many other cases, we are interested in characteristics q(x, y, . . .)
that depend on two (or even more) different inputs x, y, . . . For example:
– we may be interested in the degree q(x, y) to which a given person x would
like or dislike a certain movie y (or a certain book, or a certain research
paper if x is a researcher),
– we may be interested in knowing the degree q(x, y) to which, in a given
situation x, different controls y will lead to good results, etc.</p>
      <p>In such situations, we need to compress, interpolate, and extrapolate the desired
dependence q(x, y, . . .).</p>
      <p>How can we compress, interpolate, and extrapolate multi-input
dependencies: general description. For simplicity, let us consider the case when the
desired quantity depends only on two inputs: q = q(x, y). In this case, in the
beginning,
– we have information about x and information about y, and
– we need to perform some processing of this information.</p>
      <p>A natural way to speed up data processing is to perform some operations in
parallel – just as a natural way for a person to perform a task
faster is to have several helpers working at the same time on the same task.
So, if there are some computational steps where we can process x separately
and process y separately, these steps need to be performed in parallel before
everything else. Thus, in general, processing such data consists of the following
two major stages:
– first, we perform an appropriate processing on x, resulting in some values
a(x), and at the same time, we perform an appropriate processing on y,
producing b(y);
– after that, we perform some processing on the results a(x) and b(y) of the first
stage, producing F (a(x), b(y)), where F denotes the algorithm performed at
this second stage.
At the end, we approximate the original dependence q(x, y) with the
simpler-to-store and simpler-to-process dependence F(a(x), b(y)) ≈ q(x, y).</p>
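      <p>As a toy illustration of this two-stage scheme (the feature maps a and b and the combination F below are made up purely for illustration, not taken from the paper):</p>
      <preformat>
```python
import numpy as np

# A minimal sketch of the two-stage scheme: process x and y separately
# (possibly in parallel), then combine the partial results a(x) and b(y).
def a(x):
    # stage 1a: processing that depends on x alone (made-up features)
    return np.array([1.0, x, x * x])

def b(y):
    # stage 1b: processing that depends on y alone (made-up features)
    return np.array([np.sin(y), np.cos(y), 1.0])

def F(ax, by):
    # stage 2: combine the two partial results
    return float(np.dot(ax, by))

q_approx = F(a(2.0), b(0.5))   # approximates some q(2.0, 0.5)
```
      </preformat>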
      <p>Let us describe possible situations, from the simplest to the most complicated.
Linear case. The simplest case is when q(x, y) can be well approximated as a
linear function of the values a(x) = (a1(x), . . . , ak(x)), i.e., when
q(x, y) ≈ b0(y) + b1(y) · a1(x) + . . . + bk(y) · ak(x)
for some coefficients bi(y) depending on y. By adding a0(x) = 1, we can make
this formula more uniform:</p>
      <p>q(x, y) = b0(y) · a0(x) + b1(y) · a1(x) + . . . + bk(y) · ak(x).</p>
      <p>
        In matrix notation, with qxy := q(x, y), axi := ai(x), and byi := bi(y), this formula
takes the form
qxy = Σ (i = 0, . . . , k) axi · byi. (1)
This is a known idea of matrix decomposition, actively used in Principal
Component Analysis (see, e.g., [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]), in predicting people’s reaction to movies, etc.
(see, e.g., [
        <xref ref-type="bibr" rid="ref1 ref17">1, 17</xref>
        ]).
      </p>
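      <p>When the matrix qxy is fully known, the best sum of k rank-one terms in the least-squares sense can be computed via the truncated singular value decomposition; a minimal sketch on synthetic data (the sizes and the random seed are arbitrary):</p>
      <preformat>
```python
import numpy as np

# Formula (1) approximates the matrix q_xy by a sum of rank-one terms
# a_xi * b_yi; the best such approximation in the least-squares sense
# comes from the truncated singular value decomposition (SVD).
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 3))    # hidden features a_xi of the x's
B = rng.standard_normal((3, 80))     # hidden features b_yi of the y's
Q = A @ B                            # an exactly rank-3 "ratings" matrix

U, s, Vt = np.linalg.svd(Q, full_matrices=False)
k = 3
Q_k = (U[:, :k] * s[:k]) @ Vt[:k]    # best rank-k approximation

err = np.linalg.norm(Q - Q_k)        # essentially zero, since rank(Q) = 3
```
      </preformat>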
      <p>A natural generalization of the linear case, to operations generalizing
addition and multiplication. In the above case (1):
– we use multiplication to process the individual components axi and byi of the
results a(x) and b(y) of processing x and y, and
– we use addition to combine these results.</p>
      <p>Instead of multiplication and addition, we can use more general combination
functions.</p>
      <p>
        For example, we can have expert control rules of the type “if x satisfies the
property ai (e.g., if x &gt; 0.1), then the control y should satisfy the property bi
(e.g., y ∈ [0, 1])”. We can then combine these rules into an equivalent formula,
according to which y is a reasonable control for the situation x if:
– either the first rule is applicable, i.e., x satisfies the property a1 and y satisfies
the property b1,
– or the second rule is applicable, i.e., x satisfies the property a2 and y satisfies
the property b2, etc.
      </p>
      <p>If we:
– denote the truth value of the statement “x satisfies the property ai” by ai(x)
and
– denote the truth value of the statement “y satisfies the property bi” by bi(y),
then the truth value q(x, y) of the statement “y is a reasonable control for x”
takes the form</p>
      <p>q(x, y) = (a1(x) &amp; b1(y)) ∨ (a2(x) &amp; b2(y)) ∨ . . .</p>
      <p>
        This is the usual example of formal concept analysis; see, e.g., [
        <xref ref-type="bibr" rid="ref3 ref6">3, 6</xref>
        ].
      </p>
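      <p>In the crisp (2-valued) case, this disjunction-of-conjunctions formula is exactly a Boolean matrix product of the two factor matrices; a small sketch with made-up factor matrices:</p>
      <preformat>
```python
import numpy as np

# The disjunction-of-conjunctions formula is a Boolean matrix product:
# q[x, y] = OR over i of (A[x, i] AND B[i, y]).
A = np.array([[1, 0],              # a_i(x): which properties each x satisfies
              [0, 1],
              [1, 1]], dtype=bool)
B = np.array([[1, 1, 0],           # b_i(y): which controls each property allows
              [0, 1, 1]], dtype=bool)

Q = np.zeros((3, 3), dtype=bool)
for i in range(2):
    Q = np.logical_or(Q, np.outer(A[:, i], B[i, :]))
```
      </preformat>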
      <p>
        This example can be extended to the case when experts use imprecise
(“fuzzy”) words from natural language to describe their rules; see,
e.g., [
        <xref ref-type="bibr" rid="ref10 ref13 ref14 ref16 ref2 ref8">2, 8, 10, 13, 14, 16</xref>
        ]. In this case, the expert’s control rules have a similar
form “if x is ai (e.g., small), then the control y should be bi (e.g., moderate)”.
We can similarly translate these rules into an equivalent formula, according to
which y is a reasonable control for the situation x if one of these rules is
applicable, i.e., if for some i, x is ai and y is bi.
      </p>
      <p>We can then ask the expert to estimate, on a scale from 0 to 1, the degrees ai(x)
to which different values x satisfy the imprecise (“fuzzy”) property ai and the
degrees bi(y) to which different values y satisfy the property bi.</p>
      <p>Since it is usually not practically possible to ask the expert to provide
estimates for the combined statement “x satisfies the property ai and y satisfies
the property bi” for all the pairs (x, y) – there are just too many possible pairs
– we have to estimate the degrees to which such statements are true based on
whatever information is available – namely, the degrees ai(x) and bi(y). For this
estimation, we can use a general algorithm f&amp;(a, b) for estimating our degree of
confidence in a composite statement A &amp; B based on our degrees of confidence
a and b in the statements A and B.</p>
      <p>This algorithm has to satisfy certain properties: e.g., since A &amp; B means the
same as B &amp; A, this operation must be commutative; since A &amp; (B &amp; C) is
equivalent to (A &amp; B) &amp; C, this operation must be associative, etc. Such operations
are known as “and”-operations, or, for historical reasons, t-norms.</p>
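      <p>A minimal sketch of three standard t-norms (the specific test values are arbitrary):</p>
      <preformat>
```python
# Three standard "and"-operations (t-norms); each is commutative and
# associative, with 1 acting as the neutral element.
def t_min(a, b):               # minimum (Goedel) t-norm
    return min(a, b)

def t_prod(a, b):              # product t-norm
    return a * b

def t_luk(a, b):               # Lukasiewicz t-norm
    return max(a + b - 1.0, 0.0)

# associativity spot-check for the Lukasiewicz t-norm
lhs = t_luk(0.9, t_luk(0.8, 0.7))
rhs = t_luk(t_luk(0.9, 0.8), 0.7)
```
      </preformat>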
      <p>Similarly, we can use a similarly motivated “or”-operation (also known as a
t-conorm) f∨(a, b) to estimate our degree of confidence in A ∨ B based on our
degrees of confidence a and b in the statements A and B. In these terms, the
desired degree of confidence q(x, y) can be described as follows:
q(x, y) = f∨(f&amp;(a1(x), b1(y)), f&amp;(a2(x), b2(y)), . . .). (2)</p>
      <p>Case when some transformations are linear, while others are not. So
far, we have considered the case when the function F is linear – or similar to
linear, with more general operations instead of addition and multiplication.</p>
      <p>
        In some cases, a linear transformation is followed by a non-linear one. For
example, in a traditional 3-layer neural network (see, e.g., [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]), the result q of
processing the inputs x1, . . . , xn has the form
      </p>
      <p>q = Σ (k = 1, . . . , K) Wk · sk(Σ (i = 1, . . . , n) wki · xi − wk0) − W0, (3)
for some non-linear functions sk(z). In other words, we first compute the linear
combinations ak(x) = Σ (i = 1, . . . , n) wki · xi − wk0, and then perform a
non-linear transformation q = Σ (k = 1, . . . , K) Wk · sk(ak) − W0.</p>
      <p>General case, when everything is possibly non-linear. In general, we
may have non-linear transformations a(x) and b(y), followed by a non-linear
transformation F(a, b).</p>
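      <p>A sketch of formula (3) with randomly chosen weights; the sigmoid is used here as one common choice of the non-linear functions sk (the sizes and the seed are arbitrary):</p>
      <preformat>
```python
import numpy as np

# A sketch of formula (3): n inputs, K hidden neurons with non-linear
# activations s_k (here: sigmoids), followed by a linear output.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
n, K = 4, 3
w  = rng.standard_normal((K, n))     # hidden-layer weights w_ki
w0 = rng.standard_normal(K)          # hidden-layer biases  w_k0
W  = rng.standard_normal(K)          # output weights       W_k
W0 = 0.1                             # output bias          W_0

x = rng.standard_normal(n)
a = w @ x - w0                       # first, the linear stage: a_k(x)
q = float(W @ sigmoid(a) - W0)       # then, the non-linear stage
```
      </preformat>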
      <p>
        A typical example of such a representation is deep learning (see, e.g., [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]),
where the dimension of the signal decreases as we go from the multi-D input
through processing layers, and thus, the original multi-dimensional signal is
compressed – and, in general, compressed non-linearly. Interestingly, in many
experiments, the intermediate results have intuitive meaning – so that the same
intermediate values can be used for other problems.
      </p>
      <p>What Is the Remaining Problem and How Formal
Concept Analysis Techniques Can Help</p>
      <p>General problem. If we have rules, and these rules are perfect, there is no
problem. However, this is rarely the case. In most practical situations, we have
some information about q(x, y), and we need to come up with the appropriate
decomposition into a(x), b(y), and F(a, b).</p>
      <p>– In the linear case of matrix decomposition, we have examples of people’s
attitude to different movies, and we need to come up with the most adequate
values axi and byi.
– In the case of formal concept analysis, we have a table (often only partially
filled) of truth values q(x, y), and we need to find appropriate predicates
ai(x) and bi(y).
– In the case of intelligent control, we have degrees q(x, y), and we need to
come up with appropriate rules – e.g., with the most appropriate functions
ai(x) and bi(y).
– In the case of deep learning, while there are spectacular successes – like
beating a world champion in Go – there are also spectacular failures, when
the system classifies a picture that clearly shows a cat as a dog, and vice versa.
This means that even in this case, the problem of finding the appropriate values
ai(x) and bi(y) is far from being solved.</p>
      <p>
        How can formal concept analysis techniques help. Of course, most of
the above problems are NP-hard (see, e.g., [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ]), so we cannot expect to
find a feasible algorithm that always finds a solution. However, many efficient
techniques have been developed in formal concept analysis, and it is desirable to
extend them to other cases as well.
      </p>
      <p>
        Such an extension is clearly possible – e.g., the paper [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] provided an efficient
greedy algorithm for deriving fuzzy values when f&amp;(a, b) = max(a + b − 1, 0) and
f∨(a, b) = min(a + b, 1), and the paper [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] showed that the same algorithm can
work for other “and”- and “or”-operations as well. The unpublished result from
[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] is briefly described in the Appendix.
      </p>
      <p>Conclusion. Our conclusion is simple and straightforward: let us think big, let
us extend what we have to other cases.</p>
      <p>Acknowledgments. The author is greatly thankful to Radim Belohlavek, Marketa Krmelova, and
Martin Trnecka for their help and encouragement.</p>
      <p>Appendix: How Formal Concept Analysis Can Help Extract
Intelligent Control Rules</p>
      <p>We have a finite set of pairs (x, y) for which we know q(x, y). Let us denote this
set by P. Based on this information, how can we find appropriate functions ai(x)
and bi(y)?</p>
      <p>
        In this appendix, we show that in this general intelligent control case, we can
use a greedy algorithm that was originally proposed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for a specific case of
“and”- and “or”-operations.
      </p>
      <p>By definition of a greedy algorithm:
– we start by finding the functions a1(x) and b1(y),
– then we fix the selected functions a1(x) and b1(y) and find the functions
a2(x) and b2(y), etc., and,
– in general, we fix the already selected functions a1(x), b1(y), . . . , ak−1(x),
bk−1(y), and select a pair of functions ak(x) and bk(y).</p>
    </sec>
    <sec id="sec-2">
      <title>How do we select these functions ak(x) and bk(y)?</title>
      <p>From the equation (2) and from the fact that p ≤ f∨(p, q) for all p and q, we
can conclude that for each k, we should have
f∨(f&amp;(a1(x), b1(y)), . . . , f&amp;(ak−1(x), bk−1(y)), f&amp;(ak(x), bk(y))) ≤ q(x, y),
i.e., equivalently, that
f∨(qk−1(x, y), f&amp;(ak(x), bk(y))) ≤ q(x, y), (4)
where we denoted
qk−1(x, y) := f∨(f&amp;(a1(x), b1(y)), . . . , f&amp;(ak−1(x), bk−1(y)))
for k ≥ 2 and q0(x, y) := 0.</p>
      <p>A natural idea is to select the functions ak(x) and bk(y) that would cover as
many pairs (x, y) as possible, i.e., for which the value
Nk(ak, bk) := #{(x, y) ∈ P : f∨(qk−1(x, y), f&amp;(ak(x), bk(y))) = q(x, y)}
is the largest possible, where #S denotes the number of elements in the set S.</p>
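      <p>A sketch of computing the coverage count Nk on a toy data set, assuming the Łukasiewicz operations f&amp;(a, b) = max(a + b − 1, 0) and f∨(a, b) = min(a + b, 1) mentioned earlier in the paper; the specific degrees below are made up:</p>
      <preformat>
```python
def f_and(a, b):               # Lukasiewicz t-norm
    return max(a + b - 1.0, 0.0)

def f_or(a, b):                # Lukasiewicz t-conorm
    return min(a + b, 1.0)

def coverage(P, q, q_prev, a_k, b_k):
    # N_k: the number of known pairs (x, y) on which the candidate
    # factor (a_k, b_k) reproduces the degree q(x, y) exactly
    covered = 0
    for (x, y) in P:
        if f_or(q_prev[(x, y)], f_and(a_k[x], b_k[y])) == q[(x, y)]:
            covered = covered + 1
    return covered

N = coverage([(0, 0), (0, 1)],
             {(0, 0): 0.5, (0, 1): 0.2},     # known degrees q(x, y)
             {(0, 0): 0.0, (0, 1): 0.0},     # q_{k-1}: nothing covered yet
             {0: 0.8},                       # candidate a_k
             {0: 0.7, 1: 0.9})               # candidate b_k
```
      </preformat>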
      <p>From this viewpoint, once we have selected bk(y), it is reasonable to select a
function ak(x) which leads to the largest possible coverage, i.e., to select
(bk)↓k(x) := sup{a : ∀y ∈ Yx (f∨(qk−1(x, y), f&amp;(a, bk(y))) ≤ q(x, y))},
where Yx := {y : (x, y) ∈ P}. Similarly, if we have selected the function ak(x),
then it is reasonable to select a function bk(y) which leads to the largest possible
coverage, i.e., to select</p>
      <p>(ak)↑k(y) := sup{b : ∀x ∈ Xy (f∨(qk−1(x, y), f&amp;(ak(x), b)) ≤ q(x, y))},
where Xy := {x : (x, y) ∈ P}.</p>
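      <p>The operator (bk)↓k can be sketched as follows; to keep the supremum computable, the degrees are restricted to a finite grid (an assumption made here purely for illustration), and the Łukasiewicz operations are again used as one possible choice:</p>
      <preformat>
```python
def f_and(a, b):               # Lukasiewicz t-norm (one possible choice)
    return max(a + b - 1.0, 0.0)

def f_or(a, b):                # Lukasiewicz t-conorm
    return min(a + b, 1.0)

def leq(p, r):                 # True when p is at most r
    return min(p, r) == p

def down_arrow(x, P, q, q_prev, b_k, grid):
    # (b_k) down-arrow at x: the largest degree a on the grid for which
    # inequality (4) holds for every known pair (x, y) with this x
    ys = [yy for (xx, yy) in P if xx == x]          # the set Y_x
    candidates = [
        a for a in grid
        if all(leq(f_or(q_prev[(x, y)], f_and(a, b_k[y])), q[(x, y)])
               for y in ys)
    ]
    return max(candidates)     # the sup in the definition

value = down_arrow(0, [(0, 0)], {(0, 0): 0.5}, {(0, 0): 0.0},
                   {0: 1.0}, [0.0, 0.25, 0.5, 0.75, 1.0])
```
      </preformat>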
      <p>Comment. These notations, similar to the usual notations from formal concept
analysis, are motivated by the fact that in the usual 2-valued logic:
– there are only two truth values 0 and 1;
– when s′ ≤ s, then s′ ∨ t ≤ s if and only if t ≤ s; and
– a &amp; b ≤ s if and only if a ≤ (b → s).</p>
      <p>By using these properties, one can check that in the 2-valued logic, the above
formulas can be represented in the following simplified equivalent form (not
depending on qk−1(x, y)):
(ak)↑k(y) = inf over x ∈ Xy of (ak(x) → q(x, y));
(bk)↓k(x) = inf over y ∈ Yx of (bk(y) → q(x, y)),
which are exactly the usual notions (ak)↑ and (bk)↓ in formal concept analysis.</p>
      <p>Let us now describe an iterative procedure for finding bk(y) and ak(x). In
the beginning, the only information that we know about bk(y) and ak(x) is
that bk(y) ≥ 0 and ak(x) ≥ 0. Thus, as the starting approximations to the
desired functions bk(y) and ak(x), we take bk^(0)(y) = 0 and ak^(0)(x) = 0. For these
functions, the quality Nk(ak^(0), bk^(0)) is simply equal to the previous value Nk−1.</p>
      <p>Let us now start improving this selection step by step. In general, let us
assume that we have already found approximations bk^(i−1)(y) and ak^(i−1)(x), for
which the approximation quality is equal to Nk(ak^(i−1), bk^(i−1)).</p>
      <p>If some pairs (x, y) are still not covered by this selection, we should try to
increase one of the functions bk^(i−1)(y) and ak^(i−1)(x). Let us start with bk^(i−1)(y).
The simplest idea is to increase the value bk^(i−1)(y0) for one of the values y0 to the
largest possible value</p>
      <p>by0(y0) := sup{b : ∀x ∈ Xy0 (f∨(qk−1(x, y0), f&amp;(ak^(i−1)(x), b)) ≤ q(x, y0))},
while keeping all other values unchanged: by0(y) = bk^(i−1)(y) for all y ≠ y0.</p>
      <p>For each y0, we form the resulting function by0(y), and take ak,y0 = (by0)↓k
and bk,y0 = (ak,y0)↑k = (by0)↓k↑k. For each y0, we find the value Nk(ak,y0, bk,y0)
of the objective function, and select the value ymax for which this value is the largest:
Nk(ak,ymax, bk,ymax) = max over y0 of Nk(ak,y0, bk,y0).</p>
      <p>The corresponding functions bk,ymax(y) and ak,ymax(x) are then taken as the
next iteration bk^(i)(y) and ak^(i)(x): bk^(i)(y) = bk,ymax(y) and ak^(i)(x) = ak,ymax(x).
Iterations continue while the value Nk continues to grow, i.e., while
Nk(ak^(i), bk^(i)) &gt; Nk(ak^(i−1), bk^(i−1)).
Once it stops growing, i.e., once we have
Nk(ak^(i), bk^(i)) = Nk(ak^(i−1), bk^(i−1)),
the iterations stop, and the corresponding functions bk(y) := bk^(i)(y) and
ak(x) := ak^(i)(x) are added to the list of pairs</p>
      <p>(a1(x), b1(y)), . . . , (ak−1(x), bk−1(y)).</p>
      <p>If after this addition, some pairs (x, y) ∈ P are still not covered, we similarly
find and add the next pair of functions ak+1(x) and bk+1(y), etc., until all the
pairs (x, y) ∈ P are covered.</p>
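      <p>The overall greedy scheme can be condensed as follows; the search for the best next pair is only stubbed out here (demo_best_pair is a made-up placeholder, not part of the algorithm):</p>
      <preformat>
```python
def greedy_factors(P, q, find_best_pair):
    # Condensed greedy scheme: repeatedly add the factor pair that
    # covers the most still-uncovered pairs, until everything is covered.
    factors = []
    q_prev = {p: 0.0 for p in P}
    uncovered = set(P)
    while uncovered:
        a_k, b_k, q_new = find_best_pair(P, q, q_prev)
        factors.append((a_k, b_k))
        q_prev = q_new
        newly = {p for p in uncovered if q_prev[p] == q[p]}
        if not newly:              # no progress is possible: stop
            break
        uncovered = uncovered - newly
    return factors

def demo_best_pair(P, q, q_prev):
    # made-up stub: pretends one factor reproduces q everywhere
    return ({}, {}, dict(q))

factors = greedy_factors([(0, 0), (1, 1)],
                         {(0, 0): 1.0, (1, 1): 0.5},
                         demo_best_pair)
```
      </preformat>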
      <p>Our preliminary experiments show that this algorithm leads to reasonable
rules.</p>
      <p>Comment. A similar greedy algorithm can be used when, instead of the
above-described methodology, we use a more logical way to convey the expert’s if-then
rules, i.e., when we take</p>
      <p>q(x, y) = f&amp;(f→(a1(x), b1(y)), f→(a2(x), b2(y)), . . .),
where f→(a, b) is an implication operation, i.e., an estimate of the degree to which
the statement A → B is true given the degrees of confidence a and b in the
statements A and B.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>G.</given-names>
            <surname>Acosta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Villanueva-Rosales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>and V.</given-names>
            <surname>Kreinovich</surname>
          </string-name>
          , “
          <article-title>Why matrix factorization works well in recommender systems: a systems-based explanation”</article-title>
          ,
          <source>Journal of Uncertain Systems</source>
          ,
          <year>2019</year>
          , Vol.
          <volume>13</volume>
          , No.
          <issue>3</issue>
          , pp.
          <fpage>164</fpage>
          -
          <lpage>167</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>R.</given-names>
            <surname>Belohlavek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Dauben</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G. J.</given-names>
            <surname>Klir</surname>
          </string-name>
          ,
          <source>Fuzzy Logic and Mathematics: A Historical Perspective</source>
          , Oxford University Press, New York,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>R.</given-names>
            <surname>Belohlavek</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Vychodil</surname>
          </string-name>
          , “
          <article-title>Discovery of optimal factors in binary data via a novel method of matrix decomposition”</article-title>
          ,
          <source>Journal of Computer and System Sciences</source>
          ,
          <year>2010</year>
          , Vol.
          <volume>76</volume>
          , No.
          <issue>1</issue>
          , pp.
          <fpage>3</fpage>
          -
          <lpage>20</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>R.</given-names>
            <surname>Belohlavek</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Vychodil</surname>
          </string-name>
          , “
          <article-title>Factor analysis of incidence data via novel decomposition of matrices”</article-title>
          ,
          <source>Proceedings of the 7th International Conference on Formal Concept Analysis ICFCA'2009</source>
          , Darmstadt, Germany, May 21-24,
          <year>2009</year>
          ,
          <source>Springer Lecture Notes in Artificial Intelligence</source>
          , Vol.
          <volume>5548</volume>
          , pp.
          <fpage>83</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Bishop</surname>
          </string-name>
          ,
          <source>Pattern Recognition and Machine Learning</source>
          , Springer, New York,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>B.</given-names>
            <surname>Ganter</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          ,
          <source>Formal Concept Analysis: Mathematical Foundations</source>
          , Springer, Berlin,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <source>Deep Learning</source>
          , MIT Press, Cambridge, Massachusetts,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>G.</given-names>
            <surname>Klir</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <source>Fuzzy Sets and Fuzzy Logic</source>
          , Prentice Hall, Upper Saddle River, New Jersey,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>M.</given-names>
            <surname>Krmelova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Belohlavek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Kreinovich</surname>
          </string-name>
          ,
          <article-title>Fuzzy Formal Concept Analysis Can Help Extract Rules from Experts</article-title>
          , unpublished paper,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Mendel</surname>
          </string-name>
          ,
          <source>Uncertain Rule-Based Fuzzy Systems: Introduction and New Directions</source>
          , Springer, Cham, Switzerland,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. D. S. Nau, Specificity Covering, Duke University, Department of Computer Science,
          <source>Technical Report CS-1976-7</source>
          ,
          <year>1976</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Nau</surname>
          </string-name>
          , G. Markowsky,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Woodbury</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. B.</given-names>
            <surname>Amos</surname>
          </string-name>
          , “
          <article-title>A mathematical analysis of human leukocyte antigen serology”</article-title>
          ,
          <source>Mathematical Biosciences</source>
          ,
          <year>1978</year>
          , Vol.
          <volume>40</volume>
          , pp.
          <fpage>243</fpage>
          -
          <lpage>270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13. H. T. Nguyen,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Walker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Walker</surname>
          </string-name>
          ,
          <source>A First Course in Fuzzy Logic</source>
          , Chapman and Hall/CRC, Boca Raton, Florida,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14. V. Novák, I. Perfilieva, and J. Močkoř,
          <source>Mathematical Principles of Fuzzy Logic</source>
          , Kluwer, Boston, Dordrecht,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15. D. J. Sheskin,
          <source>Handbook of Parametric and Nonparametric Statistical Procedures</source>
          , Chapman and Hall/CRC, Boca Raton, Florida,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>L. A.</given-names>
            <surname>Zadeh</surname>
          </string-name>
          , “Fuzzy sets”,
          <source>Information and Control</source>
          ,
          <year>1965</year>
          , Vol.
          <volume>8</volume>
          , pp.
          <fpage>338</fpage>
          -
          <lpage>353</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Han</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Peng</surname>
          </string-name>
          , “
          <article-title>Hybrid matrix factorization for recommender systems in social networks”</article-title>
          ,
          <source>International Journal on Neural and Mass-Parallel Computing and Information Systems</source>
          ,
          <year>2016</year>
          , Vol.
          <volume>26</volume>
          , No.
          <issue>6</issue>
          , pp.
          <fpage>559</fpage>
          -
          <lpage>569</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>