<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On concept lattices and implication bases from reduced contexts</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vaclav</forename><surname>Snasel</surname></persName>
							<email>vaclav.snasel@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="institution">VSB Technical University Ostrava</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Martin</forename><surname>Polovincak</surname></persName>
							<email>martin.polovincak.fei@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="institution">VSB Technical University Ostrava</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hussam</forename><forename type="middle">M</forename><surname>Dahwa</surname></persName>
							<email>hussam.dahwa@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="institution">VSB Technical University Ostrava</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Zdenek</forename><surname>Horak</surname></persName>
							<email>zdenek.horak.st4@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="institution">VSB Technical University Ostrava</orgName>
								<address>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">On concept lattices and implication bases from reduced contexts</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E66EE6468A72918FD737233210B1CEC1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T01:39+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Our paper introduces well-known methods for compressing formal contexts and focuses on the changes that compression induces in concept lattices and attribute implication bases. We discuss the Singular Value Decomposition and Non-negative Matrix Factorisation methods for compressing formal contexts. Computing concept lattices from reduced formal contexts results in a smaller number of concepts (with respect to the original lattice). We also present results of experiments showing a way to smoothly control the size of the generated Guigues-Duquenne bases and to provide some noise resistance for the basis construction process.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>In this paper we deal with approaches for obtaining concept lattices and attribute implication bases from binary data tables using methods of matrix decomposition. Matrix decomposition methods are well known in the area of information retrieval under the names Latent Semantic Indexing (LSI) and Latent Semantic Analysis (LSA) ( <ref type="bibr" target="#b0">[1]</ref>). LSI and LSA have been used to discover latent dependencies between terms (or documents). We would like to apply this approach in the area of formal concept analysis (FCA). The goal is to reduce the input data before constructing the concept lattices and implication bases, which results in reduced computational time.</p><p>Bases of attribute implications are an interesting form of knowledge extraction, because they are human-readable, convey all information from the data source, and are still as small as possible. Since, in their basic form, they are very exact, they are also vulnerable to noise in the data, and we have almost no control over the resulting number of implications in the bases. Reducing the data to a lower dimension and reconstructing them could help us solve both of these problems. The scalability and computational tractability of FCA are frequent problems; see, for example, <ref type="bibr" target="#b8">[9]</ref> for references. Relevant experiments can be found also in <ref type="bibr" target="#b9">[10]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Basic notions</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Formal concept analysis</head><p>Formal Concept Analysis (FCA) was first introduced by Rudolf Wille in 1980. FCA is based on the philosophical understanding of the world in terms of objects and attributes. It is assumed that a relation exists connecting objects to the attributes they possess. Formal context and formal concept are the fundamental notions of FCA <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>.</p><p>A formal context C = (G, M, I) consists of two sets, G and M , together with a binary relation I between G and M . The elements of G are called objects and the elements of M are called attributes of the context. To express that an object g ∈ G is in relation I with the attribute m ∈ M , we write gIm or (g, m) ∈ I and read it as "object g has the attribute m". I is also called the incidence relation of the context. For a set A ⊆ G of objects we define A ′ = {m ∈ M | gIm for all g ∈ A} (the set of attributes common to the objects in A). Correspondingly, for a set B ⊆ M of attributes, we define B ′ = {g ∈ G | gIm for all m ∈ B} (the set of objects which have all attributes in B).</p><p>A formal concept of the context (G,</p><formula xml:id="formula_0">M, I) is a pair (A, B) with A ⊆ G, B ⊆ M , A ′ = B and B ′ = A.</formula><p>We call A the extent and B the intent of the concept (A, B). B(G, M, I) denotes the set of all concepts of the context (G, M, I) and forms a complete lattice. For more details, see <ref type="bibr" target="#b1">[2]</ref>.</p></div>
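The derivation operators defined above can be sketched in a few lines of Python; the toy context below (objects g1, g2, g3 standing for the numbers 2, 3, 4) is an assumption for illustration, not data from the paper.

```python
# Hypothetical toy context: g1 = 2, g2 = 3, g3 = 4 (not from the paper).
G = {"g1", "g2", "g3"}
M = {"even", "prime", "square"}
I = {("g1", "even"), ("g1", "prime"),   # 2 is even and prime
     ("g2", "prime"),                   # 3 is prime
     ("g3", "even"), ("g3", "square")}  # 4 is even and a square

def up(A):
    """A' -- the attributes common to all objects in A."""
    return {m for m in M if all((g, m) in I for g in A)}

def down(B):
    """B' -- the objects possessing every attribute in B."""
    return {g for g in G if all((g, m) in I for m in B)}

# (A, B) is a formal concept iff A' = B and B' = A.
A = {"g1", "g3"}
B = up(A)            # attributes shared by g1 and g3
print(B, down(B))    # down(B) equals A here, so (A, B) is a formal concept
```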
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Attribute implication</head><p>An attribute implication (over a set of attributes M) is an expression A ⇒ B, where A, B ⊆ M (A and B are sets of attributes). The implication can be read as: if an object has all attributes from A, then it also has all attributes from B. It holds in the context (G,</p><formula xml:id="formula_1">M, I) if A ′ ⊆ B ′ . A pseudo-intent of a formal context (G, M, I) is a set A of attributes such that A ≠ A ′′ and B ′′ ⊆ A for each pseudo-intent B ⊂ A.</formula><p>We call a set T of attribute implications non-redundant if no implication from T follows (see <ref type="bibr" target="#b3">[4]</ref> for details) from the rest of the set. A set T of attribute implications is true in a formal context (G, M, I) if all implications from T hold in (G, M, I). A set T of implications is called sound and complete with respect to a formal context (G, M, I) if T is true in (G, M, I) and each implication true in (G, M, I) follows from T . A base w.r.t. (G, M, I) is a set of attribute implications which is sound and complete (w.r.t. (G, M, I)) and non-redundant. The set T = {A ⇒ A ′′ | A is a pseudo-intent of (G, M, I)} is a complete, minimal and non-redundant set of implications and is called the Guigues-Duquenne basis (referred to below as GD).</p><p>Bases of implications are interesting to us, since they convey all the information contained in the data table in human-understandable form and they are as small as possible. More on GD bases can be found in <ref type="bibr" target="#b3">[4]</ref>.</p><p>Illustrative example As objects we can consider the numbers from 1 to 10; some basic properties of these numbers form the attributes of these objects. A table containing this information can be used as a formal context. 
The computed Guigues-Duquenne basis is presented below.</p><p>{composite, odd} ⇒ {composite, odd, square}
{even, square} ⇒ {composite, even, square}
{even, odd} ⇒ {composite, even, odd, prime, square}
{composite, prime} ⇒ {composite, even, odd, prime, square}
{odd, square} ⇒ {composite, even, odd, prime, square}</p></div>
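Checking whether an implication holds reduces to the subset test A′ ⊆ B′. A minimal sketch using the numbers 1..10 from the example (the attribute assignment below is our own reconstruction and an assumption):

```python
def attrs(n):
    """Basic properties of n (hypothetical reconstruction of the example's attributes)."""
    a = {"even"} if n % 2 == 0 else {"odd"}
    if n in (2, 3, 5, 7):
        a.add("prime")
    if n in (4, 6, 8, 9, 10):
        a.add("composite")
    if n in (1, 4, 9):
        a.add("square")
    return a

G = range(1, 11)

def ext(B):
    """B' -- the objects (numbers) possessing every attribute in B."""
    return {n for n in G if B <= attrs(n)}

def holds(A, B):
    """A => B holds in the context iff A' is a subset of B'."""
    return ext(A) <= ext(B)

# Cf. the first rule of the basis: every odd composite in 1..10 (only 9) is a square.
print(holds({"composite", "odd"}, {"square"}))
```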
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Singular Value Decomposition</head><p>Singular value decomposition (SVD) is well known because of its application in information retrieval as LSI. SVD is especially suitable in its variant for sparse matrices <ref type="bibr" target="#b4">[5]</ref>.</p><p>Theorem 1: Let A be an m×n rank-r matrix and let σ 1 ≥ • • • ≥ σ r be the square roots of the nonzero eigenvalues of the matrix AA T . Then there are matrices U = (u 1 , . . . , u r ) and V = (v 1 , . . . , v r ), whose column vectors are orthonormal, and a diagonal matrix Σ = diag(σ 1 , . . . , σ r ) such that A = U ΣV T . This decomposition is called the singular value decomposition of matrix A and the numbers σ 1 , . . . , σ r are the singular values of the matrix A. The columns of U (or V ) are called the left (or right) singular vectors of matrix A.</p><p>Because the singular values usually fall quickly, we can take only the k greatest singular values and the corresponding singular vector co-ordinates and create a k-reduced singular value decomposition of A. Let us have k, 0 &lt; k &lt; r, and a singular value decomposition of A:</p><formula xml:id="formula_2">A = U ΣV T = (U k U 0 ) Σ k 0 0 Σ 0 V T k V T 0 . We call A k = U k Σ k V T k a k-reduced singular value decomposition (rank-k SVD). Theorem 2 (Eckart-Young): Among all m × n matrices C of rank at most k, A k is the one that minimises ||A − C|| 2 F = Σ i,j (A i,j − C i,j ) 2 .</formula></div>
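A rank-k SVD can be sketched with NumPy; the small binary matrix below is a toy assumption, not one of the paper's contexts. By the Eckart-Young theorem, A_k is the best rank-k approximation in the Frobenius norm.

```python
import numpy as np

# Toy binary context matrix (assumption, not Table 1).
A = np.array([[1, 0, 1, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 1, 1, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U diag(s) Vt
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]       # rank-k SVD: A_k = U_k S_k V_k^T

# Rounding recovers a binary (reduced) context, as done later in the experiments.
A_rounded = (A_k > 0.5).astype(int)
print(np.round(A_k, 2))
print(A_rounded)
```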
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Non-negative Matrix Factorisation</head><p>Non-negative matrix factorisation (NMF) differs from other rank reduction methods for vector space models in text mining by the use of constraints that produce non-negative basis vectors, which make possible the concept of a parts-based representation. <ref type="bibr" target="#b5">[6]</ref> first introduced the notion of parts-based representations for problems in image analysis or text mining that occupy non-negative subspaces in a vector-space model. Basis vectors contain no negative entries, so only additive combinations of the vectors can reproduce the original. NMF can be used to organise text collections into partitioned structures or clusters directly derived from the non-negative factors (see <ref type="bibr" target="#b7">[8]</ref>). Let V ∈ R m×n be a non-negative matrix and W ∈ R m×k and H ∈ R k×n for 0 &lt; k ≪ min(m, n). Common approaches to NMF obtain an approximation of V by computing a pair (W, H) that minimises the Frobenius norm of the difference V − W H. The objective function or minimisation problem can then be stated as min ||V − W H|| 2 with W ij ≥ 0 and H ij ≥ 0 for each i and j.</p><p>There are several methods for computing NMF. We have used the multiplicative update algorithm proposed by Lee and Seung <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b6">[7]</ref>.</p></div>
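A minimal sketch of the Lee-Seung multiplicative update rules for the Frobenius-norm objective; the matrix sizes, random seed, iteration count and epsilon guard are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((6, 5))           # non-negative data matrix (toy)
k = 2
W = rng.random((6, k)) + 0.1     # strictly positive initialisation
H = rng.random((k, 5)) + 0.1
eps = 1e-9                       # guards against division by zero

for _ in range(200):
    # Multiplicative updates keep W and H non-negative and
    # do not increase ||V - WH||_F at any step.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print(np.linalg.norm(V - W @ H))  # residual Frobenius norm after fitting
```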
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Experiments</head><p>In our experiments we have focused on generating concept lattices and bases from the original and the reduced contexts, and on analysing the differences with respect to the results obtained using the original context. Since the reduction methods used generate non-binary data, simple rounding was applied to obtain Boolean matrices.</p></div>
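The experimental pipeline (reduce, round, recompute concepts) can be sketched as below; the tiny context and the brute-force concept count are our assumptions, chosen so that the enumeration stays tractable.

```python
import numpy as np
from itertools import combinations

def n_concepts(C):
    """Count formal concepts of a binary context by brute force:
    every intent is the closure B'' of some attribute set B."""
    G, M = range(C.shape[0]), range(C.shape[1])
    def up(A):
        return frozenset(m for m in M if all(C[g, m] for g in A))
    def down(B):
        return frozenset(g for g in G if all(C[g, m] for m in B))
    intents = set()
    for r in range(C.shape[1] + 1):
        for B in combinations(M, r):
            intents.add(up(down(B)))
    return len(intents)

# Toy context (assumption, not Table 1), reduced by rank-2 SVD and rounded.
C = np.array([[1, 0, 1, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 1],
              [1, 1, 0, 0],
              [0, 0, 1, 1]])
U, s, Vt = np.linalg.svd(C.astype(float), full_matrices=False)
k = 2
Ck = (U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :] > 0.5).astype(int)
print(n_concepts(C), n_concepts(Ck))
```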
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Concept lattices</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Concept lattice experiments were based on the formal context in Table 1 (see fig. 1 for corresponding lattice).</head><p>
A0 A1 A2 A3 A4 A5 A6 A7 A8 A9
O0 0 0 0 0 0 0 0 0 0 1
O1 1 0 0 0 0 0 1 0 0 1
O2 0 0 0 0 0 0 1 0 0 1
O3 0 0 0 0 0 0 1 0 0 0
O4 1 0 0 0 0 0 1 0 0 0
O5 1 0 0 0 0 0 0 0 0 0
O6 0 0 1 1 0 0 1 1 0 0
O7 0 0 0 0 0 0 1 1 0 0
O8 0 0 1 1 0 0 0 0 0 0
O9 0 0 1 0 0 0 1 1 0 0
O10 0 0 0 1 0 0 1 1 0 0
O11 0 0 1 1 0 0 0 1 0 0
O12 0 0 1 1 0 0 1 0 0 0
O13 0 0 1 0 0 0 1 0 0 0
O14 0 0 1 0 0 0 0 1 0 0
O15 0 0 0 1 0 0 1 0 0 0
O16 0 0 0 1 0 0 0 1 0 0
O17 1 0 0 0 0 0 1 1 0 0
O18 0 0 1 1 0 0 0 0 0 1
O19 1 0 1 1 0 0 0 0 0 1
Table <ref type="table" target="#tab_0">1</ref>. Formal context
A0 A1 A2 A3 A4 A5 A6 A7 A8 A9
O0 0 0 0 0 0 0 0 0 0 0
O1 1 0 0 0 0 0 1 0 0 0
O2 1 0 0 0 0 0 0 0 0 1
O3 1 0 0 0 0 0 0 0 0 1
O4 1 0 0 0 0 0 0 1 0 0
O5 0 0 0 0 0 0 0 1 0 0
O6 0 0 1 1 0 0 1 1 0 0
O7 1 0 0 0 0 0 0 1 0 0
O8 0 0 1 1 0 0 1 0 0 0
O9 0 0 1 1 0 0 1 1 0 0
O10 0 0 1 1 0 0 1 1 0 0
O11 0 0 1 1 0 0 1 0 0 0
O12 0 0 1 1 0 0 1 0 0 1
O13 1 0 1 1 0 0 0 0 0 1
O14 0 0 0 0 0 0 1 0 0 0
O15 1 0 1 1 0 0 1 0 0 1
O16 0 0 0 0 0 0 1 0 0 0
O17 0 0 0 0 0 0 0 1 0 0
O18 0 0 1 1 0 0 1 0 0 0
O19 0 0 1 1 0 0 1 1 0 0
Table <ref type="table">2</ref></p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Context after SVD reduction</head><p>In the following figures we can see that node 3 from the original concept lattice was deleted, because the attribute combination (A6 and A9) in the objects (O1, O2) is no longer available: after using SVD, the attribute A6 was removed from the object O2. Node 20 was deleted too, because the attribute combination (A0 and A6) in the objects (O1, O4, O17) is no longer available: after using SVD, the attribute A6 was removed from the objects (O4, O17) and the attribute A0 was removed from the object O17.</p><p>We can also see that, after the use of SVD, some attributes are removed and some added, and more objects share the same combinations of attributes. Node 11 has the attribute combination (A2 and A6) in the objects (O6, O9, O12, O13); this combination of attributes (A2, A6) also existed in the objects (O8, O10, O11, O18, O19). Node 6 has the attribute combination (A3 and A6) in the objects (O6, O10, O11, O16); this combination of attributes (A3, A6) also existed in the objects (O8, O9, O11, O18, O19). Consequently, nodes 11 and 6 are merged into the new node (5), because all the attributes of the two nodes are present in all of the objects of the two nodes. That means the new node 5 has the attribute combination (A2, A3, A6) in the objects (O6, O8, O9, O10, O11, O12, O18, O19). Similar results can be obtained using the NMF method; the resulting lattice is shown in fig. <ref type="figure">3</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Implication bases</head><p>Controlling size of the basis In the first experiment, we generated random contexts (binary data tables with sixty objects, fifteen attributes and several densities -25%, 50%, 75%). Each matrix was then reduced to a lower dimension using one of the methods mentioned. The Guigues-Duquenne basis was then computed and the results were compared against the basis computed from the original data. The size of the input data was selected, after computing several different samples, with respect to computational tractability.</p><p>The following charts (Fig. <ref type="figure" target="#fig_2">4</ref>) illustrate the results of this experiment: the first row corresponds to reduction with the SVD method, the second one to the NMF method. In the figures on the left we present the decreasing number of implications in bases constructed from reduced contexts. Each curve corresponds to one of the aforementioned densities. While lowering the dimension of the data, we are surely losing a certain amount of information. The ratio of objects from the original context that are not covered by the new basis is shown in the figures on the right. The results were averaged over hundreds of samples.</p><p>Noise resistance Even one small change in the source data can cause quite large changes in the GD basis. That can be a huge problem in noisy environments, so we have studied whether reduction into a lower dimension could be helpful. In the following, we suppose that the data contain redundancy and that we know the number of rules contained in the data in advance. This situation is not uncommon in applications. Under this assumption, we can lower the dimension of the formal context to the number of rules.</p><p>More precisely, we have taken several randomly-generated rules (using ten attributes) and combined them into tens of rows to create a formal context. Then we put an amount of noise into the data. 
Later we reduced these datasets to a lower dimension (with the original number of rules used as the rank), using the SVD and NMF methods. In the last step we compared the GD bases computed from the original data with the bases computed from the reduced contexts. The results were again averaged over hundreds of samples, and fig. <ref type="figure" target="#fig_3">5</ref> shows an illustrative chart of the results of this experiment.</p></div>
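The noise-injection step can be sketched as flipping a random fraction of the context's entries; the context size and the noise level p below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
C = rng.integers(0, 2, size=(60, 10))            # "tens of rows", ten attributes
p = 0.05                                         # assumed fraction of entries to flip
mask = (rng.random(C.shape) < p).astype(C.dtype)
C_noisy = C ^ mask                               # XOR flips exactly the masked bits
print(int(mask.sum()), "entries flipped")
```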
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion and further work</head><p>Concept lattices We can see that both practical approaches, singular value decomposition and non-negative matrix factorisation, were successful and reduced the original concept lattices. The number of concepts in the reduced concept lattices is lower than in the original concept lattices. This implies that the computation time for reduced lattices will be lower, which is why reducing lattices can be useful.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Implication bases</head><p>We have seen that the size of the resulting implication basis can be smoothly controlled by reduction of the formal context. Our hypothesis is as follows: reduction of the formal context to a lower dimension with SVD or NMF can lead to faster computation of the GD basis, while retaining the most important parts (most objects are still covered by the new basis). Noise resistance in basis construction can also be obtained by use of this method under the usual conditions (redundancy, etc.). </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Concept lattice computed from the formal context (Table 1)</figDesc><graphic coords="5,201.40,86.27,212.39,118.81" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Concept lattice computed from SVD reduced formal context (Table 2) Fig. 3. Concept lattice computed from NMF reduced formal context</figDesc><graphic coords="6,341.52,65.84,141.70,175.52" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Average base size and average ratio of uncovered objects for contexts with various densities. Reduced using SVD (first row), NMF (second row).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 5 .</head><label>5</label><figDesc>Fig.5. Ratio of original implications found in bases with added noise (original -without preprocessing, using SVD preprocessing, using NMF preprocessing)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 )</head><label>1</label><figDesc></figDesc><table><row><cell>Objects</cell><cell>Attrs.</cell></row><row><cell>0 0, 1, 2, 18, 19</cell><cell>9</cell></row><row><cell>1 6, 7, 9, 10, 11, 14, 16, 17</cell><cell>7</cell></row><row><cell cols="2">2 1-4, 6, 7, 9, 10, 12, 13, 15, 17 6</cell></row><row><cell>3 1, 2</cell><cell>6, 9</cell></row><row><cell>4 6, 7, 9, 10, 17</cell><cell>6, 7</cell></row><row><cell>5 6, 8, 10-12, 15, 16, 18, 19</cell><cell>3</cell></row><row><cell>6 6, 10, 11, 16</cell><cell>3, 7</cell></row><row><cell>7 6, 10, 12, 15</cell><cell>3, 6</cell></row><row><cell>8 6, 10</cell><cell>3, 6, 7</cell></row><row><cell>9 6, 8-14, 18, 19</cell><cell>2</cell></row><row><cell>10 6, 9, 11, 14</cell><cell>2, 7</cell></row><row><cell>11 6, 9, 12, 13</cell><cell>2, 6</cell></row><row><cell>12 6, 9</cell><cell>2, 6, 7</cell></row><row><cell>13 6, 8, 11, 12, 18, 19</cell><cell>2, 3</cell></row><row><cell>14 18, 19</cell><cell>2, 3, 9</cell></row><row><cell>15 6, 11</cell><cell>2, 3, 7</cell></row><row><cell>16 6, 12</cell><cell>2, 3, 6</cell></row><row><cell>17 6</cell><cell>2, 3, 6, 7</cell></row><row><cell>18 1, 4, 5, 17, 19</cell><cell>0</cell></row><row><cell>19 1, 19</cell><cell>0, 9</cell></row><row><cell>20 1, 4, 17</cell><cell>0, 6</cell></row><row><cell>21 1</cell><cell>0, 6, 9</cell></row><row><cell>22 17</cell><cell>0, 6, 7</cell></row><row><cell>23 19</cell><cell>0, 2, 3, 9</cell></row><row><cell>24</cell><cell>0-9</cell></row><row><cell>25 0-19</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 3 .</head><label>3</label><figDesc>Formal concepts</figDesc><table><row><cell>Objects</cell><cell>Attrs.</cell></row><row><cell>0 2, 3, 12, 13, 15</cell><cell>9</cell></row><row><cell cols="2">1 1, 4-7, 9, 10, 17, 19 7</cell></row><row><cell cols="2">2 6, 8-12, 14, 16, 18, 19 6</cell></row><row><cell>3 6, 8-13, 15, 18, 19</cell><cell>3, 6</cell></row><row><cell>4 12, 13, 15</cell><cell>2, 3, 9</cell></row><row><cell>5 6, 8-12, 18, 19</cell><cell>2, 3, 6</cell></row><row><cell>6 12</cell><cell>2, 3, 6, 9</cell></row><row><cell>7 6, 9, 10, 19</cell><cell>2, 3, 6, 7</cell></row><row><cell>8 1, 2, 3, 4, 7, 13, 15</cell><cell>0</cell></row><row><cell>9 2, 3, 13, 15</cell><cell>0, 9</cell></row><row><cell>10 1, 4, 7</cell><cell>0, 7</cell></row><row><cell>11 13, 15</cell><cell>0, 2, 3, 9</cell></row><row><cell>12</cell><cell>0-9</cell></row><row><cell>13 0-19</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 4 .</head><label>4</label><figDesc>Formal concepts after SVD reduction</figDesc><table /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Indexing by latent semantic analysis</title>
		<author>
			<persName><forename type="first">S</forename><surname>Deerwester</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T</forename><surname>Dumais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">W</forename><surname>Furnas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">K</forename><surname>Landauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Harshman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Society for Information Science</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="391" to="407" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
		<title level="m">Formal Concept Analysis: Mathematical Foundations</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Lattices and data analysis: How to draw them using a computer</title>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Algorithms and Order</title>
				<editor>
			<persName><forename type="first">I</forename></persName>
		</editor>
		<meeting><address><addrLine>Boston</addrLine></address></meeting>
		<imprint>
			<publisher>Kluwer</publisher>
			<date type="published" when="1989">1989</date>
			<biblScope unit="page" from="33" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Familles minimales d'implications informatives résultant d'un tableau de données binaires</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Guigues</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Duquenne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Math. Sci. Humaines</title>
		<imprint>
			<biblScope unit="volume">95</biblScope>
			<biblScope unit="page" from="5" to="18" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Computation methods for intelligent information access</title>
		<author>
			<persName><forename type="first">T</forename><surname>Letsche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dumais</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1995 ACM/IEEE conference on Supercomputing</title>
				<meeting>the 1995 ACM/IEEE conference on Supercomputing<address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Learning the parts of objects by non-negative matrix factorization</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Seung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">401</biblScope>
			<biblScope unit="page" from="788" to="791" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Algorithms for Non-Negative Matrix Factorization</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Seung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in Neural Information Processing Systems</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="556" to="562" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Document clustering using nonnegative matrix factorization</title>
		<author>
			<persName><forename type="first">F</forename><surname>Shahnaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">P</forename><surname>Pauca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Plemmons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Processing and Management</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="373" to="386" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Scalability in Formal Concept Analysis</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Cole</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">W</forename><surname>Eklund</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="11" to="27" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Complexity Reduction in Lattice-Based Information Retrieval</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Cheung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Vogel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Retrieval</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="285" to="299" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
