<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Formal Concept Analysis Techniques Can Help in Intelligent Control, Deep Learning, etc</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Vladik</forename><surname>Kreinovich</surname></persName>
							<email>vladik@utep.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Texas at El Paso</orgName>
								<address>
									<addrLine>500 W. University</addrLine>
									<postCode>79968</postCode>
									<settlement>El Paso</settlement>
									<region>TX</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Formal Concept Analysis Techniques Can Help in Intelligent Control, Deep Learning, etc</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">8C8DAEDF21C9245B58D0B7C483462D79</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T06:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, we show that formal concept analysis is a particular case of a more general problem that includes deriving rules for intelligent control, finding appropriate properties for deep learning algorithms, etc. Because of this, we believe that formal concept analysis techniques can be (and need to be) extended to these application areas as well. To show that such an extension is possible, we explain how these techniques can be applied to intelligent control.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">How Formal Concept Analysis Fits into the General Scheme of Being</head><p>General need for compression. The current world is filled with data. At any given moment of time, numerous sensors produce humongous amounts of numerical measurement results, images, videos, etc. This data is usually useful (otherwise it would not have been produced), so it is desirable to store and process this information.</p><p>However, the problem is that our ability to produce information far exceeds our ability to store and process it. As a result, we cannot physically store every bit produced by the sensors: we need to compress this data.</p><p>Compression is closely related to interpolation and extrapolation. Of course, if all the bits were independent, if each bit were informative, containing information that cannot be extracted from the other bits, there would be no way to drastically compress this information without losing a significant portion of it. And this would make producing this immediately deleted information useless.</p><p>The good news is that the bits are not independent: there is usually a strong correlation between them, a correlation that allows us to drastically decrease the number of stored bits without losing much information. For example, a photo can be easily compressed from several Megabytes to dozens of Kilobytes, several orders of magnitude, and we can still easily recognize all the features of a person in the compressed photo posted on a webpage.</p><p>This correlation also enables us to effectively interpolate and extrapolate, i.e., to adequately reconstruct missing information. For example, based on many readings of temperature, wind speed, and other meteorological characteristics at several locations and heights, we can reasonably accurately reconstruct the values of these characteristics at other locations and heights.</p><p>Functions of one, two, etc. inputs. 
In many cases, we are interested in characteristics q that depend only on one (possibly multi-dimensional) input x: q = f(x). For example:</p><p>-we may be interested in the temperature q(x) at different locations and different moments of time (i.e., at different points x in space-time); -we may be interested in the income q(x) of different people x at different moments of time, etc.</p><p>However, in many other cases, we are interested in characteristics q(x, y, . . .) that depend on two (or even more) different inputs x, y, . . . For example:</p><p>-we may be interested in the degree q(x, y) to which a given person x would like or dislike a certain movie y (or a certain book, or a certain research paper if x is a researcher); -we may be interested in knowing the degree q(x, y) to which, in a given situation x, different controls y will lead to good results, etc.</p><p>In such situations, we need to compress, interpolate, and extrapolate the desired dependence q(x, y, . . .).</p><p>How can we compress, interpolate, and extrapolate multi-input dependencies: general description. For simplicity, let us consider the case when the desired quantity depends only on two inputs: q = q(x, y). In this case, in the beginning,</p><p>-we have information about x and information about y, and -we need to perform some processing of this information.</p><p>A natural way to speed up data processing is to perform some operations in parallel, just as, for us humans, a natural way to finish a task faster is to have several helpers working on it at the same time. So, if there are some computational steps where we can process x separately and process y separately, it makes sense to perform these steps first, in parallel, before everything else. 
Thus, in general, processing such data consists of the following two major stages:</p><p>-first, we perform an appropriate processing on x, resulting in some values a(x), and at the same time, we perform an appropriate processing on y, producing b(y); -after that, we perform some processing on the results a(x) and b(y) of the first stage, producing F(a(x), b(y)), where F denotes the algorithm performed at this second stage.</p><p>At the end, we approximate the original dependence q(x, y) with the simpler-to-store and simpler-to-process dependence F(a(x), b(y)) ≈ q(x, y).</p><p>Let us describe possible situations, from the simplest to the most complicated.</p><p>Linear case. The simplest case is when q(x, y) can be well approximated as a linear function of the values a(x) = (a_1(x), . . . , a_k(x)), i.e., when</p><formula xml:id="formula_0">q(x, y) ≈ b_0(y) + b_1(y) • a_1(x) + . . . + b_k(y) • a_k(x)</formula><p>for some coefficients b_i(y) depending on y. By adding a_0(x) = 1, we can make this formula more uniform:</p><formula xml:id="formula_1">q(x, y) = b_0(y) • a_0(x) + b_1(y) • a_1(x) + . . . + b_k(y) • a_k(x).</formula><p>In matrix notation, with q_xy def= q(x, y), a_xi def= a_i(x), and b_yi def= b_i(y), this formula takes the form</p><formula xml:id="formula_2">q_xy = Σ_{i=0}^{k} a_xi • b_yi.<label>(1)</label></formula><p>This is the known idea of matrix decomposition, actively used in Principal Component Analysis (see, e.g., <ref type="bibr" target="#b14">[15]</ref>), in predicting people's reactions to movies, etc. (see, e.g., <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b16">17]</ref>).</p><p>A natural generalization of the linear case, to operations generalizing addition and multiplication. 
In the above case (1):</p><p>-we use multiplication to process the individual components a_xi and b_yi of the results a(x) and b(y) of processing x and y, and -we use addition to combine these results.</p><p>Instead of multiplication and addition, we can use more general combination operations.</p><p>For example, we can have expert control rules of the type "if x satisfies the property a_i (e.g., if x &gt; 0.1), then the control y should satisfy the property b_i (e.g., y ∈ [0, 1])". We can then combine these rules into an equivalent formula, according to which y is a reasonable control for the situation x if:</p><p>-either the first rule is applicable, i.e., x satisfies the property a_1 and y satisfies the property b_1, -or the second rule is applicable, i.e., x satisfies the property a_2 and y satisfies the property b_2, etc.</p></div>
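The linear decomposition (1) above can be illustrated with a small numerical sketch. This is a toy example, assuming numpy; the matrix q and the rank k are made-up illustrative values. Truncated SVD yields the best rank-k factors a_xi and b_yi whose products reconstruct q_xy:

```python
import numpy as np

# Hypothetical degrees q[x, y] for 4 situations x and 3 options y
# (chosen here to be exactly rank 1, so one factor suffices).
q = np.array([[5.0, 3.0, 1.0],
              [4.0, 2.4, 0.8],
              [1.0, 0.6, 0.2],
              [2.0, 1.2, 0.4]])

# Truncated SVD gives the best rank-k approximation q ≈ A @ B,
# i.e., q_xy ≈ sum over i of a_xi * b_yi, as in formula (1).
k = 1
U, s, Vt = np.linalg.svd(q, full_matrices=False)
A = U[:, :k] * s[:k]   # a_xi: factors depending only on x
B = Vt[:k, :]          # b_yi: factors depending only on y
approx = A @ B

print(np.max(np.abs(q - approx)))  # tiny: this q is (numerically) rank 1
```

Storing A and B takes 4 + 3 numbers instead of 12, which is exactly the compression effect described in the text.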
<div xmlns="http://www.tei-c.org/ns/1.0"><head>If we:</head><p>-denote the truth value of the statement "x satisfies the property a_i" by a_i(x), and -denote the truth value of the statement "y satisfies the property b_i" by b_i(y),</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>then the truth value q(x, y) of the statement "y is a reasonable control for x" takes the form</p><formula xml:id="formula_3">q(x, y) = (a_1(x) &amp; b_1(y)) ∨ (a_2(x) &amp; b_2(y)) ∨ . . .</formula><p>This is the usual example of formal concept analysis; see, e.g., <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b5">6]</ref>. This example can be extended to the case when experts use imprecise ("fuzzy") words from natural language to describe their rules; see, e.g., <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b15">16]</ref>. In this case, the expert's control rules have a similar form "if x is a_i (e.g., small), then the control y should be b_i (e.g., moderate)". We can similarly translate these rules into an equivalent formula, according to which y is a reasonable control for the situation x if:</p><p>-either the first rule is applicable, i.e., x satisfies the property a_1 and y satisfies the property b_1, -or the second rule is applicable, i.e., x satisfies the property a_2 and y satisfies the property b_2, etc.</p><p>We can then ask the expert to estimate, on a scale from 0 to 1, the degrees a_i(x) to which different values x satisfy the imprecise ("fuzzy") property a_i and the degrees b_i(y) to which different values y satisfy the property b_i. 
Since it is usually not practically possible to ask the expert to provide estimates for the combined statement "x satisfies the property a_i and y satisfies the property b_i" for all the pairs (x, y) (there are just too many possible pairs), we have to estimate the degrees to which such statements are true based on whatever information is available, namely, the degrees a_i(x) and b_i(y). For this estimation, we can use a general algorithm f_&amp;(a, b) for estimating our degree of confidence in a composite statement A &amp; B based on our degrees of confidence a and b in the statements A and B. This algorithm has to satisfy certain properties: e.g., since A &amp; B means the same as B &amp; A, this operation must be commutative; since A &amp; (B &amp; C) is equivalent to (A &amp; B) &amp; C, this operation must be associative, etc. Such operations are known as "and"-operations or, for historical reasons, t-norms. Similarly, we can use a similarly motivated "or"-operation (also known as a t-conorm) f_∨(a, b) to estimate our degree of confidence in A ∨ B based on our degrees of confidence a and b in the statements A and B. In these terms, the desired degree of confidence q(x, y) can be described as follows:</p><formula xml:id="formula_4">q(x, y) = f_∨(f_&amp;(a_1(x), b_1(y)), f_&amp;(a_2(x), b_2(y)), . . .).</formula><formula xml:id="formula_5"><label>(2)</label></formula><p>Case when some transformations are linear, while others are not. So far, we have considered the case when the function F is linear, or similar to linear, with more general operations instead of addition and multiplication.</p><p>In some cases, a linear transformation is followed by a non-linear one. For example, in a traditional 3-layer neural network (see, e.g., <ref type="bibr" target="#b4">[5]</ref>), the result q of processing the inputs x_1, . . . , x_n has the form</p><formula xml:id="formula_6">q = Σ_{k=1}^{K} W_k • s_k(Σ_{i=1}^{n} w_ki • x_i − w_k0) − W_0,<label>(3)</label></formula><p>for some non-linear functions s_k(z).</p><p>In other words, we first compute the linear combinations</p><formula xml:id="formula_7">a_k(x) = Σ_{i=1}^{n} w_ki • x_i − w_k0,</formula><p>and then perform a non-linear transformation</p><formula xml:id="formula_8">q = Σ_{k=1}^{K} W_k • s_k(a_k) − W_0.</formula><p>General case, when everything is possibly non-linear. 
In general, we may have non-linear transformations a(x) and b(y), followed by a non-linear transformation F(a, b).</p><p>A typical example of such a representation is deep learning (see, e.g., <ref type="bibr" target="#b6">[7]</ref>), where the dimension of the signal decreases as we go from the multi-dimensional input through the processing layers; thus, the original multi-dimensional signal is compressed, and, in general, compressed non-linearly. Interestingly, in many experiments, the intermediate results have an intuitive meaning, so the same intermediate values can be used for other problems as well.</p></div>
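The two-stage structure of formula (3), a linear first stage followed by a non-linear second stage, can be sketched numerically. This is a minimal sketch assuming numpy; the toy sizes, the random weights, and the choice of tanh for the nonlinearities s_k are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 4, 3                    # toy sizes: n inputs, K hidden neurons
w  = rng.normal(size=(K, n))   # hidden-layer weights w_ki
w0 = rng.normal(size=K)        # hidden-layer biases  w_k0
W  = rng.normal(size=K)        # output weights       W_k
W0 = 0.1                       # output bias          W_0

def net(x):
    # Stage 1 (linear): a_k = sum over i of w_ki * x_i, minus w_k0
    a = w @ x - w0
    # Stage 2 (non-linear): q = sum over k of W_k * s_k(a_k), minus W_0
    return W @ np.tanh(a) - W0

x = rng.normal(size=n)
print(net(x))
```

The point of the split is the one made in the text: the first stage can be computed independently (and in parallel) for each input before the results are combined.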
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">What Is the Remaining Problem and How Formal Concept Analysis Techniques Can Help</head><p>General problem. If we have rules, and these rules are perfect, there is no problem. However, this is rarely the case. In most practical situations, we have some information about q(x, y), and we need to come up with the appropriate decomposition into a(x), b(y), and F(a, b).</p><p>-In the linear case of matrix decomposition, we have examples of people's attitudes toward different movies, and we need to come up with the most adequate values a_xi and b_yi. -In the case of formal concept analysis, we have a table (often only partially filled) of truth values q(x, y), and we need to find appropriate predicates a_i(x) and b_i(y). -In the case of intelligent control, we have degrees q(x, y), and we need to come up with appropriate rules, e.g., with the most appropriate functions a_i(x) and b_i(y). -In the case of deep learning, while there are spectacular successes, such as beating a world champion in Go, there are also spectacular failures, when the system classifies a picture that clearly shows a cat as a dog, and vice versa. This means that even in this case, the problem of finding the appropriate values a_i(x) and b_i(y) is far from being solved.</p><p>How can formal concept analysis techniques help? Of course, most of the above problems are NP-hard (see, e.g., <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>), so we cannot expect to find a feasible algorithm that always finds a solution. However, many efficient techniques have been developed in formal concept analysis, and it is desirable to extend them to other cases as well. 
Such an extension is clearly possible: e.g., the paper <ref type="bibr" target="#b3">[4]</ref> provided an efficient greedy algorithm for deriving fuzzy values for the case when f_&amp;(a, b) = max(a + b − 1, 0) and f_∨(a, b) = min(a + b, 1), and the paper <ref type="bibr" target="#b8">[9]</ref> showed that the same algorithm can work for other "and"- and "or"-operations as well. The unpublished result from <ref type="bibr" target="#b8">[9]</ref> is briefly described in the Appendix.</p></div>
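The specific pair of operations mentioned here, the Łukasiewicz t-norm and t-conorm, can be rendered directly as a small sketch of formula (2). The input degrees below are made-up illustrative values:

```python
from functools import reduce

# Łukasiewicz operations named in the text:
def f_and(a, b):  # "and"-operation (t-norm)
    return max(a + b - 1.0, 0.0)

def f_or(a, b):   # "or"-operation (t-conorm)
    return min(a + b, 1.0)

def q(degrees):
    """Formula (2): combine, over all rules i, f_and(a_i(x), b_i(y)) with f_or.

    `degrees` lists the pairs (a_i(x), b_i(y)) for one fixed pair (x, y)."""
    return reduce(f_or, (f_and(a, b) for a, b in degrees), 0.0)

# Two hypothetical rules: the first fires strongly, the second not at all.
print(q([(0.9, 0.8), (0.4, 0.3)]))  # ≈ 0.7: max(0.9 + 0.8 − 1, 0)
```

In the crisp special case where all degrees are 0 or 1, f_and and f_or reduce to the usual Boolean "and" and "or", recovering the formal-concept-analysis formula for q(x, y) given earlier.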
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion.</head><p>Our conclusion is simple and straightforward: let us think big, let us extend what we have to other cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A How Formal Concept Analysis Can Help Extract Intelligent Control Rules</head><p>We have a finite set of pairs (x, y) for which we know q(x, y). Let us denote this set by P. Based on this information, how can we find appropriate functions a_i(x) and b_i(y)?</p><p>In this appendix, we show that in this general intelligent control case, we can use a greedy algorithm that was originally proposed in <ref type="bibr" target="#b3">[4]</ref> for a specific pair of "and"- and "or"-operations.</p><p>By definition of a greedy algorithm:</p><p>-we start by finding the functions a_1(x) and b_1(y); -then we fix the selected functions a_1(x) and b_1(y) and find the functions a_2(x) and b_2(y), etc.; -in general, we fix the already selected functions a_1(x), b_1(y), . . . , a_{k−1}(x), b_{k−1}(y), and select a pair of functions a_k(x) and b_k(y).</p><p>How do we select these functions a_k(x) and b_k(y)? From the equation (<ref type="formula" target="#formula_5">2</ref>) and from the fact that p ≤ f_∨(p, q) for all p and q, we can conclude that for each k, we should have</p><formula xml:id="formula_9">f_∨(f_&amp;(a_1(x), b_1(y)), . . . , f_&amp;(a_{k−1}(x), b_{k−1}(y)), f_&amp;(a_k(x), b_k(y))) ≤ q(x, y), i.e., equivalently, that f_∨(q_{k−1}(x, y), f_&amp;(a_k(x), b_k(y))) ≤ q(x, y),<label>(4)</label></formula><p>where we denoted</p><formula xml:id="formula_10">q_{k−1}(x, y) def= f_∨(f_&amp;(a_1(x), b_1(y)), . . . , f_&amp;(a_{k−1}(x), b_{k−1}(y)))</formula><p>for k − 1 ≥ 1 and q_0(x, y) def= 0. A natural idea is to select the functions a_k(x) and b_k(y) that would cover as many pairs (x, y) as possible, i.e., for which the value</p><formula xml:id="formula_11">N_k(a_k, b_k) def= #{(x, y) ∈ P : f_∨(q_{k−1}(x, y), f_&amp;(a_k(x), b_k(y))) = q(x, y)}</formula><p>is the largest possible, where #S denotes the number of elements in the set S.</p><p>From this viewpoint, once we have selected b_k(y), it is reasonable to select the function a_k(x) which leads to the largest possible coverage, i.e., to select</p><formula xml:id="formula_12">(b_k)↓_k(x) def= sup{a : ∀y ∈ Y_x (f_∨(q_{k−1}(x, y), f_&amp;(a, b_k(y))) ≤ q(x, y))},</formula><p>where Y_x def= {y : (x, y) ∈ P}. Similarly, if we have selected the function a_k(x), then it is reasonable to select the function b_k(y) which leads to the largest possible coverage, i.e., to select</p><formula xml:id="formula_13">(a_k)↑_k(y) def= sup{b : ∀x ∈ X_y (f_∨(q_{k−1}(x, y), f_&amp;(a_k(x), b)) ≤ q(x, y))},</formula><p>where X_y def= {x : (x, y) ∈ P}.</p><p>Comment. These notations, similar to the usual notations of formal concept analysis, are motivated by the fact that in the usual 2-valued logic:</p><p>-there are only two truth values 0 and 1; -when s_0 ≤ s, then s_0 ∨ t ≤ s if and only if t ≤ s; and -a &amp; b ≤ s if and only if a ≤ (b → s).</p><p>By using these properties, one can check that in the 2-valued logic, the above formulas can be represented in the following simplified equivalent form (not depending on q_{k−1}(x, y)):</p><formula xml:id="formula_14">(a_k)↑_k(y) = inf_{x ∈ X_y} (a_k(x) → q(x, y)); (b_k)↓_k(x) = inf_{y ∈ Y_x} (b_k(y) → q(x, y)),</formula><p>which are exactly the usual notions (a_k)↑ and (b_k)↓ in formal concept analysis.</p><p>Let us now describe an iterative procedure for finding b_k(y) and a_k(x). In the beginning, the only information that we know about b_k(y) and a_k(x) is that b_k(y) ≥ 0 and a_k(x) ≥ 0. Thus, as the starting approximations to the desired functions, we take b_k^(0)(y) = 0 and a_k^(0)(x) = 0, for which the approximation quality N_k is equal to the previous value N_{k−1}. Let us now start improving this selection step by step.</p><p>In general, let us assume that we have already found approximations b_k^(i−1)(y) and a_k^(i−1)(x), for which the approximation quality is equal to N_k(a_k^(i−1), b_k^(i−1)). If some pairs (x, y) ∈ P are still not covered by this selection, we should try to increase one of the functions b_k^(i−1)(y) and a_k^(i−1)(x). Let us start with b_k^(i−1)(y). The simplest idea is to increase the value b_k^(i−1)(y) for one of the values y_0 to the largest possible value</p><formula xml:id="formula_15">b_{y_0}(y_0) def= sup{b : ∀x ∈ X_{y_0} (f_∨(q_{k−1}(x, y_0), f_&amp;(a_k^(i−1)(x), b)) ≤ q(x, y_0))},</formula><p>while keeping all other values unchanged: b_{y_0}(y) = b_k^(i−1)(y) for all y ≠ y_0. For each y_0, we form the resulting function b_{y_0}(y) and take a_{k,y_0} = (b_{y_0})↓_k and b_{k,y_0} = (a_{k,y_0})↑_k = ((b_{y_0})↓_k)↑_k. For each y_0, we find the value N_k(a_{k,y_0}, b_{k,y_0}) of the objective function, and select the value y_max for which this value is the largest: N_k(a_{k,y_max}, b_{k,y_max}) = max_{y_0} N_k(a_{k,y_0}, b_{k,y_0}). The corresponding functions b_{k,y_max}(y) and a_{k,y_max}(x) are then taken as the next iteration: b_k^(i) = b_{k,y_max} and a_k^(i) = a_{k,y_max}. Iterations continue while the value N_k continues to grow, i.e., while N_k(a_k^(i), b_k^(i)) &gt; N_k(a_k^(i−1), b_k^(i−1)). Once it stops growing, i.e., once we have N_k(a_k^(i), b_k^(i)) = N_k(a_k^(i−1), b_k^(i−1)), the iterations stop, and the corresponding functions b_k(y) def= b_k^(i)(y) and a_k(x) def= a_k^(i)(x) are added to the list of pairs (a_1(x), b_1(y)), . . . , (a_{k−1}(x), b_{k−1}(y)). If after this addition, some pairs (x, y) ∈ P are still not covered, we similarly find and add the next pair of functions a_{k+1}(x) and b_{k+1}(y), etc., until all the pairs (x, y) ∈ P are covered.</p><p>Our preliminary experiments show that this algorithm leads to reasonable rules.</p><p>Comment. A similar greedy algorithm can be used when, instead of the above-described methodology, we use a more logical way to convey the expert's if-then rules, i.e., when we take q(x, y) = f_&amp;(f_→(a_1(x), b_1(y)), f_→(a_2(x), b_2(y)), . . .), where f_→(a, b) is an implication operation, i.e., an estimate of the degree to which the statement A → B is true, given the degrees of confidence a and b in the statements A and B.</p></div>
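In the crisp (2-valued) special case, where the ↓ and ↑ operators reduce to the usual derivation operators of formal concept analysis, the greedy coverage idea can be sketched compactly. This is a loose, much-simplified sketch assuming numpy, not the fuzzy algorithm itself: it grows factors from single columns rather than via the per-y_0 iterative improvement described above, and the toy matrix q is made up:

```python
import numpy as np

def down(q, b):
    """(b)↓: situations x compatible with every control selected by b."""
    return q[:, b].all(axis=1) if b.any() else np.ones(q.shape[0], dtype=bool)

def up(q, a):
    """(a)↑: controls y compatible with every situation selected by a."""
    return q[a, :].all(axis=0) if a.any() else np.ones(q.shape[1], dtype=bool)

def greedy_factors(q):
    """Greedily cover all 1s of a boolean matrix q with factors (a_k, b_k)."""
    covered = np.zeros_like(q)
    factors = []
    while not np.array_equal(covered | ~q, np.ones_like(q)):  # some 1 uncovered
        best, best_gain = None, -1
        for y0 in range(q.shape[1]):           # try each single-control seed
            b = np.zeros(q.shape[1], dtype=bool)
            b[y0] = True
            a = down(q, b)                     # a = (b)↓
            b2 = up(q, a)                      # b2 = (a)↑ = closed factor
            gain = int((np.outer(a, b2) & q & ~covered).sum())
            if gain > best_gain:
                best, best_gain = (a, b2), gain
        a, b = best
        factors.append((a, b))
        covered |= np.outer(a, b)
        if best_gain == 0:                     # safety: no progress possible
            break
    return factors

q = np.array([[1, 1, 0],
              [1, 1, 1],
              [0, 1, 1]], dtype=bool)
factors = greedy_factors(q)
recon = np.zeros_like(q)
for a, b in factors:
    recon |= np.outer(a, b)
print(np.array_equal(recon, q))  # the selected factors exactly reconstruct q
```

Each factor (a, b) plays the role of one rule "if x satisfies a, then control b is reasonable", and the loop keeps adding rules until every known pair (x, y) with q(x, y) = 1 is covered, mirroring the stopping condition of the appendix.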
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The author is greatly thankful to Radim Belohlavek, Marketa Krmelova, and Martin Trnecka for their help and encouragement.</p></div>
			</div>


			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work was supported in part by the National Science Foundation grants 1623190 (A Model of Change for Preparing a New Generation for Professional Practice in Computer Science) and HRD-1242122 (Cyber-ShARE Center of Excellence).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Why matrix factorization works well in recommender systems: a systems-based explanation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Acosta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hernandez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Villanueva-Rosales</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Uncertain Systems</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="164" to="167" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Belohlavek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Dauben</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">J</forename><surname>Klir</surname></persName>
		</author>
		<title level="m">Fuzzy Logic and Mathematics: A Historical Perspective</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Discovery of optimal factors in binary data via a novel method of matrix decomposition</title>
		<author>
			<persName><forename type="first">R</forename><surname>Belohlavek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vychodil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer and System Sciences</title>
		<imprint>
			<biblScope unit="volume">76</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="3" to="20" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Factor analysis of incidence data via novel decomposition of matrices</title>
		<author>
			<persName><forename type="first">R</forename><surname>Belohlavek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vychodil</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Formal Concept Analysis ICFCA&apos;2009</title>
		<title level="s">Springer Lecture Notes in Artificial Intelligence</title>
		<meeting>the 7th International Conference on Formal Concept Analysis ICFCA&apos;2009<address><addrLine>Darmstadt, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009">May 21-24, 2009</date>
			<biblScope unit="volume">5548</biblScope>
			<biblScope unit="page" from="83" to="97" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Pattern Recognition and Machine Learning</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Bishop</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2006">2006</date>
			<publisher>Springer</publisher>
			<pubPlace>New York</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Formal Concept Analysis</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Mathematical Foundations</title>
				<meeting><address><addrLine>Berlin</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Deep Learning</title>
		<author>
			<persName><forename type="first">I</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Courville</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, Massachusetts</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Fuzzy Sets and Fuzzy Logic</title>
		<author>
			<persName><forename type="first">G</forename><surname>Klir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yuan</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1995">1995</date>
			<publisher>Prentice Hall</publisher>
			<pubPlace>Upper Saddle River, New Jersey</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Fuzzy Formal Concept Analysis Can Help Extract Rules from Experts</title>
		<author>
			<persName><forename type="first">M</forename><surname>Krmelova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Belohlavek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note>unpublished paper</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Mendel</surname></persName>
		</author>
		<title level="m">Uncertain Rule-Based Fuzzy Systems: Introduction and New Directions</title>
				<meeting><address><addrLine>Cham, Switzerland</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Nau</surname></persName>
		</author>
		<idno>CS-1976-7</idno>
		<title level="m">Specificity Covering</title>
				<imprint>
			<date type="published" when="1976">1976</date>
		</imprint>
		<respStmt>
			<orgName>Department of Computer Science, Duke University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A mathematical analysis of human leukocyte antigen serology</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Nau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Markowsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Woodbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Amos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mathematical Biosciences</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="243" to="270" />
			<date type="published" when="1978">1978</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">A First Course in Fuzzy Logic</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">T</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">L</forename><surname>Walker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Walker</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>Chapman and Hall/CRC</publisher>
			<pubPlace>Boca Raton, Florida</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Mathematical Principles of Fuzzy Logic</title>
		<author>
			<persName><forename type="first">V</forename><surname>Novák</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Perfilieva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Močkoř</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Kluwer</publisher>
			<pubPlace>Boston, Dordrecht</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Handbook of Parametric and Nonparametric Statistical Procedures</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Sheskin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Chapman and Hall/CRC</publisher>
			<pubPlace>Boca Raton, Florida</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Fuzzy sets</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Zadeh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information and Control</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="338" to="353" />
			<date type="published" when="1965">1965</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Hybrid matrix factorization for recommender systems in social networks</title>
		<author>
			<persName><forename type="first">C</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Peng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal on Neural and Mass-Parallel Computing and Information Systems</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="559" to="569" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
