<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Using Matrix Decompositions in Formal Concept Analysis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vaclav</forename><surname>Snasel</surname></persName>
							<email>vaclav.snasel@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Dept. of Computer Science</orgName>
								<orgName type="department" key="dep2">Faculty of Electrical Engineering and Computer Science</orgName>
								<orgName type="institution">VŠB-Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15</addrLine>
									<postCode>708 33</postCode>
									<settlement>Ostrava</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Petr</forename><surname>Gajdos</surname></persName>
							<email>petr.gajdos@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Dept. of Computer Science</orgName>
								<orgName type="department" key="dep2">Faculty of Electrical Engineering and Computer Science</orgName>
								<orgName type="institution">VŠB-Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15</addrLine>
									<postCode>708 33</postCode>
									<settlement>Ostrava</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Hussam</forename><forename type="middle">M</forename><surname>Dahwa Abdulla</surname></persName>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Dept. of Computer Science</orgName>
								<orgName type="department" key="dep2">Faculty of Electrical Engineering and Computer Science</orgName>
								<orgName type="institution">VŠB-Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15</addrLine>
									<postCode>708 33</postCode>
									<settlement>Ostrava</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Martin</forename><surname>Polovincak</surname></persName>
							<email>martin.polovincak.fei@vsb.cz</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Dept. of Computer Science</orgName>
								<orgName type="department" key="dep2">Faculty of Electrical Engineering and Computer Science</orgName>
								<orgName type="institution">VŠB-Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15</addrLine>
									<postCode>708 33</postCode>
									<settlement>Ostrava</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Using Matrix Decompositions in Formal Concept Analysis</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">28D1BA3113CC0C48BDE8C95630B812AD</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Formal Concept Analysis</term>
					<term>Singular Value Decomposition</term>
					<term>Semi Discrete Decomposition</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>One of the main problems in formal concept analysis and lattice construction is the high complexity of the algorithms, which becomes significant when computing all concepts from a huge incidence matrix. In some cases we only need to compute some of them to test for common attributes. In our research we modify the incidence matrix using matrix decompositions, creating a new matrix with fewer dimensions as input for known lattice-construction algorithms. In this paper we describe these matrix decomposition methods and their influence on the concept lattice.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>We are dealing with possible uses of matrix decompositions for the reduction of concept lattices. These methods are well known in the area of information retrieval under the names Latent Semantic Indexing (LSI) and Latent Semantic Analysis (LSA) <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>. LSI and LSA have been used to discover latent dependencies between terms (or documents). We would like to apply this approach in the area of formal concept analysis (FCA) <ref type="bibr" target="#b4">[5]</ref>. First, we introduce the basic terms of Formal Concept Analysis (FCA); then we describe the rank-k SVD decomposition and the LSI method. Because the SVD decomposition presents some difficulty connected with the setting of its parameters, we also introduce a different approach based on semidiscrete matrix decomposition.</p><p>Small examples illustrate our approaches. We hope that this is a way to use FCA on large data collections and to visualize a data structure via a concept lattice. Note that the usability of this method depends on the concrete application, because it introduces some noise into the lattice structure in order to simplify it.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background</head><p>In the following paragraphs we introduce the basic terms of formal concept analysis, singular value decomposition and semidiscrete decomposition. The pros and cons of the matrix decompositions are discussed later.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Formal Concept Analysis</head><p>Definition 1. <ref type="bibr" target="#b8">[9]</ref> A formal context C = (G, M, I) consists of two sets G and M and a relation I ⊆ G × M. The elements of G are called the objects and the elements of M the attributes of the context. To express that an object g ∈ G is related via I to an attribute m ∈ M, we write gIm or (g, m) ∈ I and read it as "the object g has the attribute m". I is also called the incidence relation of the context.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2. [9] For a set of objects A ⊆ G we define</head><formula xml:id="formula_0">A′ = { m ∈ M | gIm for all g ∈ A }</formula><p>(the set of attributes common to the objects in A).</p><p>Correspondingly, for a set of attributes B ⊆ M we define</p><formula xml:id="formula_1">B′ = { g ∈ G | gIm for all m ∈ B }</formula><p>(the set of objects which have all attributes in B).</p></div>
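The two derivation operators of Definition 2 are each a one-liner; the following sketch uses a tiny hypothetical context (the object and attribute names are ours, not taken from the paper):

```python
# Toy formal context (hypothetical data, not the paper's tram data):
# objects G, attributes M, incidence relation I ⊆ G × M.
G = {"g1", "g2", "g3"}
M = {"a", "b", "c"}
I = {("g1", "a"), ("g1", "b"), ("g2", "b"), ("g3", "b"), ("g3", "c")}

def common_attributes(A):
    """A' -- the attributes shared by every object in A (Definition 2)."""
    return {m for m in M if all((g, m) in I for g in A)}

def common_objects(B):
    """B' -- the objects that have every attribute in B (Definition 2)."""
    return {g for g in G if all((g, m) in I for m in B)}
```

Composing the two operators gives the closure A″ used implicitly in Definitions 3 and 4.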
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 3. [9] A formal concept of the context (G, M, I) is a pair (A, B) with A ⊆ G, B ⊆ M, A′ = B and B′ = A. We call A the extent and B the intent of the concept (A, B). β(G, M, I) denotes the set of all concepts of the context (G, M, I).</head><p>Definition 4. <ref type="bibr" target="#b8">[9]</ref> The concept lattice β(G, M, I) is a complete lattice in which infimum and supremum are given by</p><formula xml:id="formula_2">⋀_{t∈T} (A_t, B_t) = (⋂_{t∈T} A_t, (⋃_{t∈T} B_t)″), ⋁_{t∈T} (A_t, B_t) = ((⋃_{t∈T} A_t)″, ⋂_{t∈T} B_t)</formula></div>
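Since every extent has the form B′ for some attribute set B, a brute-force pass over attribute subsets recovers all concepts of a small context. This is only a sketch for tiny contexts (it is exponential in |M|) with our own toy data, not one of the lattice-construction algorithms used in the paper:

```python
from itertools import combinations

# Tiny illustrative context (not the paper's data).
G = {1, 2, 3, 4}
M = ("a", "b", "c")
I = {(1, "a"), (1, "b"), (2, "b"), (3, "b"), (3, "c"), (4, "c")}

def extent(B):
    """B' -- objects having all attributes in B."""
    return frozenset(g for g in G if all((g, m) in I for m in B))

def intent(A):
    """A' -- attributes common to all objects in A."""
    return frozenset(m for m in M if all((g, m) in I for g in A))

def all_concepts():
    """Every pair (A, B) with A = B' and B = A', collected once per extent."""
    found = {}
    for r in range(len(M) + 1):
        for B in combinations(M, r):
            A = extent(B)
            found[A] = intent(A)   # closing B gives the concept's intent
    return sorted(found.items(), key=lambda ab: -len(ab[0]))
```

For this context the enumeration yields six concepts, from the top concept (G, ∅) down to (∅, M).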
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Singular Value Decomposition</head><p>Singular value decomposition (SVD) is well known because of its application in information retrieval as LSI. SVD is especially suitable in its variant for sparse matrices. <ref type="bibr" target="#b2">[3]</ref></p><p>Theorem 1 (Singular Value Decomposition). Let A be an m × n matrix of rank r, and let σ_1 ≥ … ≥ σ_r be the square roots of the nonzero eigenvalues of the matrix AA^T. Then there exist matrices U = (u_1, …, u_r) and V = (v_1, …, v_r), whose column vectors are orthonormal, and a diagonal matrix Σ = diag(σ_1, …, σ_r), such that A = UΣV^T. This decomposition is referred to as a singular value decomposition of the matrix A, and the numbers σ_1, …, σ_r are the singular values of A. The columns of U (resp. V) are referred to as the left (resp. right) singular vectors of A.</p><p>Now we have a decomposition of the original matrix A. Needless to say, the left and right singular vectors are not sparse. There are at most r nonzero singular values, where r ≤ min(m, n). Because the singular values usually decrease quickly, we can take only the k greatest singular values with the corresponding singular vector coordinates and create a k-reduced singular value decomposition of A.</p></div>
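Theorem 1 can be checked numerically. The sketch below uses numpy (our choice of tooling, not the paper's) on a small illustrative incidence matrix:

```python
import numpy as np

# Illustrative binary incidence matrix (not the paper's data).
A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)

# numpy returns U, the singular values s in decreasing order, and V^T.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A = U Σ V^T holds up to floating-point rounding, and the squared singular
# values are the eigenvalues of A A^T, as Theorem 1 states.
```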
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 5. Let k, 0 &lt; k &lt; r, be given, and let A = UΣV^T be a singular value decomposition of A partitioned as</head><formula xml:id="formula_3">A = UΣV^T = (U_k U_0) [Σ_k 0; 0 Σ_0] [V_k^T; V_0^T].</formula><p>Then A_k = U_k Σ_k V_k^T is referred to as a k-reduced singular value decomposition (rank-k SVD) of A.</p><p>In information retrieval, if every document is relevant to only one topic, we obtain a latent semantics: semantically related words and documents will have similar vectors in the reduced space. For an illustration of rank-k SVD see Fig. 1; the grey areas mark the first k coordinates of the singular vectors, which are the ones being used.</p></div>
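Truncating to the k largest singular values, as in Definition 5, is a few lines of numpy (an illustration with our own toy matrix, not the paper's data):

```python
import numpy as np

A = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [1, 1, 1]], dtype=float)
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
# Keep only the k largest singular values and the matching vector coordinates:
# A_k = U_k Σ_k V_k^T, a matrix of the same shape but rank at most k.
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
```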
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Theorem 2 (Eckart-Young). Among all m × n matrices C of rank at most k, A_k is the one that minimizes</head><formula xml:id="formula_4">||A − C||_F^2 = Σ_{i,j} (A_{i,j} − C_{i,j})^2.</formula><p>Because the rank-k SVD is the best rank-k approximation of the original matrix A, any other rank-k decomposition can only increase the sum of squared errors beyond ||A − A_k||_F^2.</p><p>The SVD is hard to compute, and once computed it reflects only the decomposition of the original matrix. Recalculating the SVD is expensive, so it is impractical to recompute it every time new rows or columns are inserted. SVD-updating is a partial solution, but since the error grows slightly with each inserted row or column, a full recalculation of the SVD may still be needed when updates occur frequently.</p><p>Note: From now on, we will assume the rank-k singular value decomposition when speaking about SVD.</p></div>
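The Eckart-Young property can be sanity-checked by pitting the truncated SVD against random rank-k matrices; this numpy sketch uses our own random data, purely as an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((8, 5))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
# The optimal Frobenius error equals the tail of the singular values.
err_best = np.linalg.norm(A - A_k, "fro")

# No other rank-<=k matrix can do better; sample a few random ones to see.
worse = min(
    np.linalg.norm(A - rng.random((8, k)) @ rng.random((k, 5)), "fro")
    for _ in range(200)
)
```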
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Semi Discrete Matrix Decomposition (SDD)</head><p>We have used another alternative decomposition in our research, the semidiscrete decomposition (SDD). For the same data set and incidence matrix, the SDD does as well as the SVD while using less than one-tenth of the storage space. In this section we briefly overview the SDD and describe its purpose.</p><p>The semidiscrete decomposition approximates a matrix as a weighted sum of outer products formed by vectors whose entries are constrained to the set S = {−1, 0, 1}. O'Leary and Peleg introduced the SDD in the context of image compression <ref type="bibr" target="#b7">[8]</ref>, and Kolda and O'Leary used the SDD for latent semantic indexing (LSI) in information retrieval <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b6">[7]</ref>.</p><p>The primary advantage of the SDD over other types of matrix decompositions such as the truncated SVD is that it typically provides a more accurate approximation for far less storage. The SDD of an m × n matrix A is shown in figure <ref type="figure">2</ref>:</p><formula xml:id="formula_5">A_k = d_1 x_1 y_1^T + … + d_k x_k y_k^T = X_k D_k Y_k^T</formula><p>Fig. 2. k-reduced semidiscrete decomposition</p><p>Here each x_i is an m-vector with entries from the set S = {−1, 0, 1}, each y_i is an n-vector with entries from S, and each d_i is a positive scalar. We call this a k-term SDD. Note: From now on, we will assume the rank-k semidiscrete decomposition when speaking about SDD.</p></div>
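A simplified greedy version of the O'Leary-Peleg construction can be sketched as follows. This is our own rough rendering of the idea (peel off one discrete rank-1 term d·x·y^T at a time, alternating exact 1-D solves for x and y), not the authors' implementation:

```python
import numpy as np

def best_sign_vector(v):
    """The s in {-1,0,1}^len(v) maximizing (s.v)^2 / ||s||^2; the optimal
    support is a prefix of the entries sorted by decreasing |v_i|."""
    order = np.argsort(-np.abs(v))
    best_val, best_s = -1.0, None
    for j in range(1, len(v) + 1):
        val = np.abs(v[order[:j]]).sum() ** 2 / j
        if val > best_val:
            best_val = val
            s = np.zeros_like(v)
            s[order[:j]] = np.sign(v[order[:j]])
            best_s = s
    return best_s

def sdd(A, k, inner=10):
    """Greedy k-term semidiscrete decomposition (simplified sketch).
    Returns X (m x k), d (k,), Y (n x k) with X, Y entries in {-1, 0, 1}."""
    m, n = A.shape
    R = A.astype(float).copy()
    X, Y, d = np.zeros((m, k)), np.zeros((n, k)), np.zeros(k)
    for t in range(k):
        y = np.zeros(n)
        y[np.argmax(np.linalg.norm(R, axis=0))] = 1.0  # start: heaviest column
        for _ in range(inner):                         # alternate x- / y-solves
            x = best_sign_vector(R @ y)
            y = best_sign_vector(R.T @ x)
        d[t] = (x @ R @ y) / ((x @ x) * (y @ y))       # optimal scalar for x, y
        X[:, t], Y[:, t] = x, y
        R -= d[t] * np.outer(x, y)                     # peel off this term
    return X, d, Y
```

Each step subtracts the best discrete rank-1 term it found from the residual, so the approximation error never increases with k.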
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Nonnegative Matrix Factorization (NMF)</head><p>Nonnegative matrix factorization differs from other rank-reduction methods for vector space models in text mining, e.g. principal component analysis (PCA) or vector quantization (VQ), in its use of constraints that produce nonnegative basis vectors, which make the concept of a parts-based representation possible.</p><p>Lee and Seung <ref type="bibr" target="#b10">[11]</ref> first introduced the notion of parts-based representations for problems in image analysis and text mining that occupy nonnegative subspaces in a vector-space model. Techniques like PCA and VQ also generate basis vectors, various additive and subtractive combinations of which can be used to reconstruct the original space. But the basis vectors of PCA and VQ contain negative entries and cannot be directly related to the original vector space to derive meaningful interpretations. In the case of NMF, the basis vectors contain no negative entries, allowing only additive combinations of the vectors to reproduce the original. So the perception of the whole, be it an image or a document in a collection, becomes a combination of its parts represented by these basis vectors. In text mining the basis vectors represent or identify semantic features, i.e. sets of words denoting a particular concept or topic. If a document is viewed as a combination of basis vectors, then it can be categorized as belonging to the topic represented by its principal vector. Thus NMF can be used to organize text collections into partitioned structures or clusters directly derived from the nonnegative factors <ref type="bibr" target="#b9">[10]</ref>.</p></div>
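The Lee-Seung multiplicative update rules cited above can be sketched in a few lines. This is a minimal illustration of NMF under the Frobenius objective, not the factorization code actually used in the experiment:

```python
import numpy as np

def nmf(A, k, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for A ≈ W @ H (Frobenius objective).
    A must be nonnegative; W (m x k) and H (k x n) stay nonnegative because
    each update multiplies by a ratio of nonnegative quantities."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W, H = rng.random((m, k)) + eps, rng.random((k, n)) + eps
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Applied to a nonnegative matrix of low rank, the product W @ H approaches the original while both factors remain entrywise nonnegative, which is what gives the parts-based reading.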
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Our Experiment</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Input Data</head><p>Data from the Ostrava Mass Transit Authority was used in this experiment. The data can be described as a table in which rows represent tram stops and columns represent tram lines. A 1 at a row-column intersection means the tram line stops at that tram stop; otherwise the entry is zero. This data was chosen because we can easily see whether the SVD is working: many tram lines share the same tram stops, these shared parts should be merged by the SVD, and the effect should be visible in the lattice after the SVD is applied.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Steps of Our Experiment</head><p>Our experiment included several steps. The first step was to read the data and transform it into an adjacency matrix. After the transformation, the SVD A = UΣV^T was computed, yielding three new matrices. After choosing the rank k, a new (reduced) matrix was computed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Reading and Transforming Data</head><p>The data was obtained from tram timetables. The first ten tram lines and their tram stops were chosen. The transformed matrix had 92 rows and 10 columns. The transformation was simple: if a tram stops at a particular tram stop, a 1 was placed at the corresponding row-column intersection, otherwise a zero. This matrix is also the incidence matrix and was used in the SVD, NMF and FCA computations.</p></div>
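The transformation step can be sketched as follows; the stop and line names here are hypothetical stand-ins, since the real timetable data is not reproduced in the paper:

```python
import numpy as np

# Hypothetical stop and line names standing in for the real timetable data.
stops = ["Poruba", "Svinov", "Centrum", "Vitkovice"]
lines = {"1": {"Poruba", "Svinov"},
         "2": {"Svinov", "Centrum", "Vitkovice"}}

line_names = sorted(lines)
# 1 where the line stops at the stop, 0 otherwise -- the incidence matrix,
# with one row per tram stop and one column per tram line.
A = np.array([[1 if stop in lines[ln] else 0 for ln in line_names]
              for stop in stops])
```

With the paper's data this produces the 92 × 10 matrix described above.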
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Using SVD</head><p>The SVD computations were made with the SVDLIBC-fix software, which is free and written in the C language. The incidence matrix acts as the input, and the software computes all three matrices U(92x10), Σ(10x10) and V^T(10x10). The rank was chosen as k = 4, so only the four greatest singular values and the corresponding singular vector coordinates were used.</p></div>
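Lacking the SVDLIBC-fix input files, the same computation can be sketched in numpy on a random stand-in matrix of the same 92 × 10 shape. The 0.5 re-binarization threshold is our assumption, since the paper does not state how the output matrix was turned back into an incidence matrix:

```python
import numpy as np

# Stand-in for the 92 x 10 incidence matrix (the real data comes from the
# timetables; here we draw a random sparse 0/1 matrix of the same shape).
rng = np.random.default_rng(42)
A = (rng.random((92, 10)) < 0.3).astype(float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 4
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # rank-4 approximation

# Re-binarize before lattice construction; the 0.5 cut-off is hypothetical.
B = (A_k > 0.5).astype(int)
```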
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5">Computing Concept Lattices</head><p>Both the incidence matrix and the product of the SVD computation were used to compute concept lattices. Both matrices were passed as input to the Concept Lattice software developed at VSB-TU Ostrava, which produced the concept lattices. The views of these concept lattices were rendered with the GraphPlace software, whose output is a PostScript file that can be viewed in any PostScript viewer.</p><p>A concept lattice was computed from the output matrix, and one can see the difference between the concept lattice computed from the original matrix and the one computed from the output matrix after SVD. The attributes (g, h) appear in other nodes at the same level; node 11 is therefore lost, together with node 10, which has lost all connections to other nodes. Example 3. Node 34 (figure <ref type="figure">7</ref>): Node 34 has attributes (a, i) in lines (8, 9, 11, 16, 17, 18, 19, 20, 21). After the SVD, attribute (i) is lost from lines (16, 17, 18, 19, 20, 21) and only attribute (a) remains. Attribute (a) appears in another node at the same level (node 38), and lines (8, 9, 11) also appear in that node.
Similarly, node 8 has attributes (f, i) in lines (8, 9, 11, 79); after the SVD we lose attribute (i) from line (79) and only attribute (f) remains, which appears in another node at the same level (node 38), together with lines (8, 9, 11). Therefore nodes 34 and 8 are lost after the SVD. Losing nodes 34 and 8 also means losing the connection with node 1, which has attribute (i), so node 1 is lost as well.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>Every node in the concept lattice has a distinct set of attributes; no two nodes can be found with the same set of attributes. The experiment consists in finding the changes of objects and their sets of attributes after applying SVD. The SVD removes some attributes from the original matrix because these attributes do not appear in many objects. Such attributes are not primary and can be lost without any problem.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1</head><label>1</label><figDesc>Fig. 1. k-reduced singular value decomposition</figDesc><graphic coords="3,193.20,381.66,209.04,81.66" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3.</head><label>3</label><figDesc>Fig. 3. Original lattice</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 4 .Fig. 5 .</head><label>45</label><figDesc>Fig. 4. Lattice after using SVD</figDesc><graphic coords="6,200.10,352.38,206.85,114.66" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 6.</head><label>6</label><figDesc>Fig. 6. Example 3</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Algorithms for drawing graphs, an annotated bibliography</title>
		<author>
			<persName><forename type="first">G</forename><surname>Battista</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Eades</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tamassia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Tollis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Geometry, Theory and Applications</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="235" to="282" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Understanding Search Engines</title>
		<author>
			<persName><forename type="first">M</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Browne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Mathematical Modeling and Text Retrieval</title>
				<meeting><address><addrLine>Siam</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Computation Methods for Intelligent Information Access</title>
		<author>
			<persName><forename type="first">M</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dumais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Letsche</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1995 ACM/IEEE Supercomputing Conference</title>
				<meeting>the 1995 ACM/IEEE Supercomputing Conference</meeting>
		<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Concept lattice generation by singular value decomposition</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gajdoš</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Moravec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CLA</title>
		<imprint>
			<biblScope unit="page" from="13" to="22" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Formal Concept Analysis</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ganter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Springer-Verlag</publisher>
			<pubPlace>Berlin Heidelberg</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Computation and uses of the semidiscrete matrix decomposition</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">G</forename><surname>Kolda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>O'Leary</surname></persName>
		</author>
		<idno>CS-TR-4012</idno>
		<imprint>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="8" to="16" />
		</imprint>
		<respStmt>
			<orgName>Oak Ridge National Laboratory</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A semidiscrete matrix decomposition for latent semantic indexing information retrieval</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">G</forename><surname>Kolda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>O'Leary</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Information Systems</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="322" to="346" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Digital image compression by outer product expansion</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>O'Leary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Peleg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Communications</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="441" to="444" />
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Lattices in data analysis: How to draw them with a computer</title>
		<author>
			<persName><forename type="first">R</forename><surname>Wille</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Algorithms and Order</title>
		<imprint>
			<biblScope unit="page" from="33" to="58" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Document clustering using nonnegative matrix factorization</title>
		<author>
			<persName><forename type="first">Farial</forename><surname>Shahnaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><forename type="middle">W</forename><surname>Berry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">Paul</forename><surname>Pauca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">J</forename><surname>Plemmons</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Inf. Process. Manage</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="373" to="386" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Algorithms for non-negative matrix factorization</title>
		<author>
			<persName><forename type="first">D</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Seung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="556" to="562" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The approximation of one matrix by another of lower rank</title>
		<author>
			<persName><forename type="first">G</forename><surname>Young</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Eckart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychometrika</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="211" to="218" />
			<date type="published" when="1936">1936</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
