<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Geometrical approach for modeling semantics in linguistics</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Milan</forename><surname>Gudába</surname></persName>
							<email>gudaba@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of informatics</orgName>
								<orgName type="department" key="dep2">FPV</orgName>
								<orgName type="institution">University of Saint Cyril and Methodius</orgName>
								<address>
									<addrLine>Nám. J. Herdu 2</addrLine>
									<postCode>917 01</postCode>
									<settlement>Trnava</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stanislav</forename><surname>Horal</surname></persName>
							<email>stanislav.horal@ucm.sk</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of informatics</orgName>
								<orgName type="department" key="dep2">FPV</orgName>
								<orgName type="institution">University of Saint Cyril and Methodius</orgName>
								<address>
									<addrLine>Nám. J. Herdu 2</addrLine>
									<postCode>917 01</postCode>
									<settlement>Trnava</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ladislav</forename><surname>Izakovič</surname></persName>
							<email>izakovil@ucm.sk</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of informatics</orgName>
								<orgName type="department" key="dep2">FPV</orgName>
								<orgName type="institution">University of Saint Cyril and Methodius</orgName>
								<address>
									<addrLine>Nám. J. Herdu 2</addrLine>
									<postCode>917 01</postCode>
									<settlement>Trnava</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michaela</forename><surname>Kalinová</surname></persName>
							<email>michaela.kalinova@ucm.sk</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">Department of informatics</orgName>
								<orgName type="department" key="dep2">FPV</orgName>
								<orgName type="institution">University of Saint Cyril and Methodius</orgName>
								<address>
									<addrLine>Nám. J. Herdu 2</addrLine>
									<postCode>917 01</postCode>
									<settlement>Trnava</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Václav</forename><surname>Snášel</surname></persName>
							<email>vaclav.snasel@vsb.cz</email>
							<affiliation key="aff1">
								<orgName type="department" key="dep1">Department of informatics</orgName>
								<orgName type="department" key="dep2">FEI</orgName>
								<orgName type="institution">VŠB - Technical University of Ostrava</orgName>
								<address>
									<addrLine>17. listopadu 15, 708 33</addrLine>
									<settlement>Ostrava-Poruba</settlement>
									<country key="CZ">Czech Republic</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Geometrical approach for modeling semantics in linguistics</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">81F9229FF76E8BEF3107831533F0BB44</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T02:43+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Information is at present often stored and made available in electronic form. With the still increasing quantity of accessible information, most frequently textual, the need to organize these data is growing, and the problem of fast and effective information retrieval arises very often. In this contribution we describe a method for creating a word vector space and for using the NOT operation to acquire relevant documents more effectively.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The high volume of text documents and the rate at which these documents grow require new approaches in linguistics and in areas related to information retrieval (IR). These new approaches are based on principles derived from the natural sciences.</p><p>The noticeable expansion of geometry was motivated by Descartes' ideas and by the establishment of the coordinate system, which permits an interconnection between geometry and algebra <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>.</p><p>In IR, geometrical methods were developed in the form of the vector model. Another meaningful step in the geometrical understanding of the world was the interconnection of geometry and logic. This connection was made on the grounds of quantum physics <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>A huge amount of multimedia data, especially text, image and acoustic documents, accompanies the present expansion of information technologies. We will consider a set of text documents as the input area. Almost all well-known information retrieval systems contain a morphological component. With its help we can remove non-semantic words from documents using a stop-list, and convert semantically significant words to their basic form. In this way we specify terms which, after weighting, form a vector in the space of concepts. This vector is then used to identify a document from the point of view of its content <ref type="bibr" target="#b15">[16]</ref>.</p><p>A method based on the combination of the vector model and Boolean logic appears to be a suitable way to create a model of natural language. In this contribution we present constructions in the vector space based on standard linear algebra, together with examples of using vector negation to separate the meanings of ambiguous words. In quantum logic, arbitrary sets are replaced by linear subspaces of a vector space, and union, intersection and complement are replaced by the vector sum, intersection and orthogonal complement of these subspaces.</p><p>A useful tool for information retrieval and processing is latent semantic analysis (LSA). This method, based on singular value decomposition (SVD), can be used to improve access to the desired documents. We retain the factor of the k greatest singular values. LSA has a geometrical representation in which objects (e.g. documents and terms) are distributed in a low-dimensional space. As an example we can use the term-document matrix, in which rows represent terms and columns represent documents. Nonzero values in the matrix signify that the corresponding documents include the respective terms. This vector space model was described by Salton <ref type="bibr" target="#b9">[10]</ref>.</p><p>The first part of our contribution describes the representation of word meanings in a vector space. The second part focuses on operations in the word space. The next chapter is devoted to basic notions of Boolean logic. In the last part we apply this theoretical knowledge to the process of searching word meanings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Word meaning representation in vector space</head><p>A vector space can be understood as a set of points in which each point is defined by a list of coordinates <ref type="bibr" target="#b4">[5]</ref>. Two points can be added by adding their coordinates, and each point can be multiplied by a scalar (in this paper scalars are real numbers, so all our vector spaces are "real" vector spaces). The first linguistic examples of vector spaces were developed for information retrieval <ref type="bibr" target="#b9">[10]</ref>. By counting the occurrences of each word in each document we get a term-document matrix. The entry (i, j) of the matrix indicates how many times the word w_i occurred in the document D_j. The rows of the matrix can then be understood as word vectors. The dimension of this vector space (the number of coordinates given to each word) is therefore equal to the number of documents in the collection. Document vectors are generated by calculating the (weighted) sum of the word vectors of the words occurring in the given document.</p><p>Similar techniques are used in information retrieval to determine the similarity relation between words and documents. Similarity can be determined by calculating the cosine of the angle between two vectors <ref type="bibr" target="#b9">[10]</ref>, where w_i, d_i are the coordinates of the vectors w and d, w · d is the inner product of w and d, and ||w|| is the length of the vector w <ref type="bibr" target="#b4">[5]</ref>.</p><formula xml:id="formula_0">sim(w, d) = (w · d) / (||w|| ||d||) = Σ_i w_i d_i / (√(Σ_i w_i²) √(Σ_i d_i²))</formula><p>The calculation is further simplified by normalizing all vectors to the same length; the cosine similarity is then equal to the Euclidean inner product. This is a standard method which avoids giving too much weight to frequent words or large documents. Normalized vectors were used in all models and experiments described in this contribution.</p><p>This structure can be used to determine similarities between pairs of words: two words will have high similarity if they occur in the same documents and only seldom does one word occur without the other. Several words can be combined into a compound query statement using the commutative vector sum.</p><p>Term-document matrices are typically very sparse. Information can be concentrated in a low number of dimensions when we use the singular values from the decomposition, transforming each word into an n-dimensional subspace; the least-squares property guarantees this. Each word is then represented using the n most significant latent variables. This process is called latent semantic analysis (LSA) <ref type="bibr" target="#b5">[6]</ref>. Especially for the purpose of determining semantic similarity between words, Schütze developed a variant of LSA <ref type="bibr" target="#b10">[11]</ref>. Instead of using documents as the columns of the matrix, content-bearing words were used. Consequently, in our case the vector of the word koruna (crown) is defined by the fact that it frequently occurred near the words cena (price) and mena (currency). This method is convenient for semantic tasks such as clustering words with similar meanings and word sense disambiguation <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>.</p></div>
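The cosine similarity above can be sketched on a toy term-document matrix. This is a minimal illustration, not from the paper: the counts and word labels are invented, and NumPy is assumed.

```python
import numpy as np

# Hypothetical term-document matrix: rows are words, columns are documents.
# The counts are invented for illustration only.
A = np.array([
    [2.0, 0.0, 1.0],   # koruna
    [1.0, 0.0, 2.0],   # cena
    [0.0, 3.0, 0.0],   # klenot
])

def cosine(w, d):
    """Cosine of the angle between two vectors: (w . d) / (||w|| ||d||)."""
    return float(np.dot(w, d) / (np.linalg.norm(w) * np.linalg.norm(d)))

koruna, cena, klenot = A
print(cosine(koruna, cena))    # high: the words share documents
print(cosine(koruna, klenot))  # zero: the words never co-occur
```

Normalizing each row to unit length in advance, as the paper does, reduces the cosine to a plain inner product.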
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Logical operations in word space</head><p>When investigating the dependencies of words in the word space, we can use logical operations: primarily negation, through relations of orthogonality, and disjunction, through the vector sum of subspaces.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Vector negation</head><p>We would like to model the meaning of the expression "koruna NOT klenot" (koruna: crown as a coin; klenot: crown as a jewel) in such a way that the system recognizes that we are interested in finances, not in the meaning of the word koruna in the sense of a jewel. We therefore need to find the aspects of the meaning of the word koruna that differ from, and have no relation to, the word koruna as a jewel. Word meanings have no interrelationship if they have no common marks. A document is considered irrelevant for the user if its inner product with the user query is equal to zero, i.e. when the query vector and the document vector are orthogonal <ref type="bibr" target="#b4">[5]</ref>  <ref type="bibr" target="#b14">[15]</ref>. Definition 2. Let V be a vector space with an inner product. Then for a vector subspace A⊆V we can define the orthogonal subspace A┴ <ref type="bibr" target="#b14">[15]</ref> A┴ ≡ {v ∈V: ∀a∈ A, a • v = 0}. These definitions can be used to carry out calculations with vectors in the vector space; we apply the standard method of projection. Example: taking the inner product of a NOT b with b, we obtain</p><formula xml:id="formula_1">(a NOT b) · b = (a − ((a · b)/(b · b)) b) · b = a · b − ((a · b)/(b · b))(b · b) = 0</formula><p>This proves that a NOT b and b are orthogonal; the vector a NOT b is therefore exactly the part of a that is irrelevant to b (Definition 1), as we required.</p><p>When the vectors are normalized, Theorem 1 takes the following form</p><formula xml:id="formula_2">a NOT b = a − (a · b) b.</formula><p>For the purpose of finding expressions or documents corresponding to a NOT b, it is not necessary to determine the difference separately for each candidate against both a and b. Theorem 1 shows that measuring the similarity between another vector and a NOT b is a simple calculation of an inner product.</p></div>
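The projection step behind a NOT b can be sketched as follows; this is a minimal NumPy sketch, with word vectors invented for illustration.

```python
import numpy as np

def vector_negate(a, b):
    """a NOT b: remove from a its component along b,
    i.e. project a onto the orthogonal complement of <b>."""
    return a - (np.dot(a, b) / np.dot(b, b)) * b

# Hypothetical word vectors, invented for illustration.
koruna = np.array([0.7, 0.5, 0.5])   # mixes the 'coin' and 'jewel' senses
klenot = np.array([0.0, 0.0, 1.0])   # pure 'jewel' direction

q = vector_negate(koruna, klenot)
print(q)                  # the 'jewel' component has been removed
print(np.dot(q, klenot))  # orthogonal to klenot, as Theorem 1 requires
```

As the section notes, ranking any candidate against a NOT b then costs a single inner product with the precomputed vector q.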
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Quantum logic and vector space</head><p>The concept of quantum logic was first encountered in the theory of quantum mechanics, as presented by <ref type="bibr" target="#b1">Birkhoff and von Neumann (1936)</ref>  <ref type="bibr" target="#b1">[2]</ref>. From set theory it is known that if we have sets A and B and an element a ∈ A or a ∈ B, then the union C = A ∪ B will also contain this element. Quantum logic, however, does not describe A and B as sets, but as subspaces of a vector space.</p><p>The structure of quantum logic is simple, and we can obtain it by substituting vector spaces and subspaces for sets and subsets <ref type="bibr" target="#b1">[2]</ref>. Points in quantum mechanics are represented by subspaces of the vector space V. In this connection we consider the collection L(V) of subspaces of the vector space V. The lower bound of two subspaces is defined as follows, and allows us to define a logic on L(V).</p><formula xml:id="formula_3">For A, B ∈ L(V), the lower bound is the largest element C ∈ L(V) such that C ⊆ A and C ⊆ B.</formula><p>An important piece of knowledge is also that every subspace A ∈ L(V) can be identified (using the scalar product) with a special projection map P_A : V → A, and through this bijection the logic of the subspaces L(V) is equivalent to the logic of projection mappings in the vector space V.</p><p>Quantum logic differs from Boolean logic in at least two properties: quantum logic is neither distributive nor commutative.</p><p>Disjunction in set theory can be modeled as the union of sets, which corresponds in linear algebra to the vector sum of subspaces, where A + B is the smallest subspace of V containing both A and B.</p><p>To determine the similarity between arbitrary objects it is necessary to define a function σ: D × D → R. This function assigns a real number to a pair of objects o_i, o_j from its domain D. It will serve as a measure of the similarity relation between objects, and it must satisfy the following requirements:</p><formula xml:id="formula_4">1. σ(o_i, o_j) ≥ 0; 2. σ(o_i, o_j) = σ(o_j, o_i), i.e. symmetry; 3. when o_i = o_j, then σ(o_i, o_j) = max σ(o_k, o_l) for all o_k, o_l ∈ D</formula><p>From the viewpoint of quantum physics, we can use P_B to measure the probability that an element is found in some state <ref type="bibr" target="#b11">[12]</ref>. The value</p><formula xml:id="formula_5">a · P_B(a)</formula><p>is interpreted as a measured probability. For our purposes we define the probability by the following relation</p><formula xml:id="formula_6">sim(a, B) = a · P_B(a),</formula><p>where the probability is given by the scalar product of a with the projection of a onto the subspace B, from which we obtain the extent to which the term a lies in the subspace B. The similarity relation has been studied from various viewpoints, see <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b11">[12]</ref>.</p><p>In practice, if the set {b_j} is not orthonormal, it is not correct simply to calculate sim(a, b_j) for every vector b_j in turn. To obtain an orthonormal basis {b̃_j} for the subspace B, it is convenient first to construct it with the Gram-Schmidt method, commonly used in practice <ref type="bibr" target="#b4">[5]</ref>.</p><formula xml:id="formula_7">P_B(a) = Σ_j (a · b̃_j) b̃_j</formula><p>Consequently, we can write</p><formula xml:id="formula_8">sim(a, B) = Σ_j (a · b̃_j)².</formula><p>To evaluate sim(a, B) we need to calculate the scalar product of a with every vector b̃_j. This similarity relation is more difficult to calculate than the one in Theorem 1.</p><p>The simplification we achieved for a NOT b, where every document can be compared using only one operation, a scalar product, is thus lost for disjunction; however, as we show later, it is recovered for negated disjunction.</p></div>
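The computation of sim(a, B) via an orthonormal basis can be sketched as follows; a minimal NumPy sketch, with the spanning vectors invented for illustration.

```python
import numpy as np

def orthonormal_basis(vectors):
    """Gram-Schmidt orthonormalisation of a list of (possibly dependent) vectors."""
    basis = []
    for v in vectors:
        # Subtract the components already captured by the basis.
        w = v - sum(np.dot(v, b) * b for b in basis)
        n = np.linalg.norm(w)
        if n > 1e-12:          # skip vectors already in the span
            basis.append(w / n)
    return basis

def sim_subspace(a, vectors):
    """sim(a, B) = a . P_B(a) = sum_j (a . b~_j)^2 over an orthonormal basis of B."""
    return sum(np.dot(a, b) ** 2 for b in orthonormal_basis(vectors))

# Invented example: B is spanned by two non-orthogonal vectors.
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([1.0, 1.0, 0.0])
a  = np.array([0.6, 0.8, 0.0])                 # unit vector lying inside B
print(sim_subspace(a, [b1, b2]))               # ~1.0: a lies entirely in B
print(sim_subspace(np.array([0.0, 0.0, 1.0]), [b1, b2]))  # orthogonal to B
```

Note the cost the text mentions: each query needs a scalar product with every basis vector of B, unlike the single product of Theorem 1.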
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Using the negation for search meanings</head><p>In this part we present an introductory example of vector connections which demonstrates the use of vector negation, as well as vector conjunction, disjunction and negation together, for finding vectors that represent different meanings of ambiguous words. We briefly describe the setup of the experiment. It shows that vector negation offers a clear improvement over the classic Boolean method described in <ref type="bibr" target="#b14">[15]</ref>. Our word space was constructed from 28 articles written in 2006 and obtained from the Internet. The total number of acquired words was 5260. The collection of articles covered economy, culture, sport, health and science, and at least two articles from every sphere were processed. Documents concerning the meaning of the word koruna (crown as a coin) are marked D13, D14, D15. On the other hand, documents related to koruna (crown as a jewel) are marked D10, D11, D</p><p>A parser was created over the data source to separate the individual words from the text. Using a morphological analyzer <ref type="bibr" target="#b3">[4]</ref>, a list of terms was then constructed. We assume that meanings of ambiguous words occurred in the articles. For example, the word koruna (crown as a coin) is used more frequently in an economic context than in a context connected with jewels. To test the effectiveness of our negation operator, we try to find less common meanings of chosen words that are overshadowed by the prevailing expression. The present experiments calculate term-document similarity for different values of k in the range &lt;2, 15&gt;. Here we illustrate the results for the factor values k=2, k=8 and k=15.</p><p>The data in Table <ref type="table" target="#tab_0">1</ref> show that vector negation is very effective for selecting relevant documents which correspond to the required word koruna (crown) and leave out the word klenot (jewel).</p><p>LSI retains the k greatest singular values. The choice of k has to be small enough to obtain fast access to documents, but large enough to adequately capture the structure of the corpus. We performed a decomposition of the matrix A for different values of k. The most relevant documents were obtained for the factor value k=15. On the contrary, for the very small value k=2 a great volume of documents is obtained, which reduces their relevance with regard to the required document.</p><p>Vector negation and disjunction can be combined when formulating a search query over areas of documents: we do not negate only one argument, but several. If the user specifies that he wants documents related to a but not to b_1, b_2, ..., b_n, it will be interpreted (without further indication) that he wants only documents which are not related to the unwanted terms b_i. In this way, the expression a AND (NOT b_1) AND (NOT b_2) ... AND (NOT b_n) passes into the form a NOT (b_1 OR b_2 ... OR b_n). Using Definition 3, we form the disjunction b_1 OR b_2 ... OR b_n as the vector subspace B = {λ_1 b_1 + ... + λ_n b_n, λ_i ∈ R}. This expression can be transformed into a definite vector which is orthogonal to all the irrelevant arguments {b_j}. This vector is a − P_B(a), where P_B is the projection onto the subspace B, as in Theorem 1. It follows that calculating the similarity between any vector and the term a NOT (b_1 OR b_2 ... OR b_n) is a simple scalar product, with the same computational effectiveness as in Theorem 1. This technique serves for the systematic reduction of irrelevant terms.</p></div>
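The multi-term negation a NOT (b_1 OR ... OR b_n) reduces to subtracting one projection; a minimal NumPy sketch with invented vectors follows.

```python
import numpy as np

def negate_many(a, negated):
    """a NOT (b1 OR ... OR bn): subtract from a its projection onto span{b1..bn}."""
    # Orthonormalise the negated terms (Gram-Schmidt), then project and subtract.
    basis = []
    for v in negated:
        w = v - sum(np.dot(v, b) * b for b in basis)
        n = np.linalg.norm(w)
        if n > 1e-12:
            basis.append(w / n)
    return a - sum(np.dot(a, b) * b for b in basis)

# Invented vectors for illustration.
a  = np.array([1.0, 1.0, 1.0])
b1 = np.array([1.0, 0.0, 0.0])
b2 = np.array([0.0, 1.0, 0.0])

q = negate_many(a, [b1, b2])
print(q)                             # only the component outside span{b1, b2} remains
print(np.dot(q, b1), np.dot(q, b2))  # orthogonal to every negated term
```

Because q = a − P_B(a) is a single vector, ranking documents against the negated disjunction again costs one scalar product each, as the section claims.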
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion and future work</head><p>Negation in the vector space is a suitable tool for reducing the dimension of the set of required documents. The specification of the searched documents can be refined by using several disjunction operations in the query. Our word space consisted of 5260 words. More relevant results can be obtained by applying this method to a larger document database.</p><p>The present experiments calculate similarity in the term-document relation. In constructing the vector space model we represented the individual occurrences in documents by a Boolean function, i.e. we expressed the presence or absence of the given term in the document.</p><p>Future experiments will be carried out over the lexical database of words and word connections in WordNet, which records the relevant lexical and semantic relations between the individual words and concepts it contains.</p><p>In subsequent experiments we will target our effort at the calculation of similarity in the term-term relation, using various forms of representing the weight of an individual term in the document.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Definition 1 .</head><label>1</label><figDesc>Two words a and b are considered irrelevant to each other if their vectors are orthogonal, i.e. a · b = 0.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>If A and B are subspaces of the space V, then NOT B represents B┴, and A NOT B represents the projection of A onto B┴. When a, b belong to V, then a NOT b represents the projection of a onto &lt;b&gt;┴, where &lt;b&gt; is the subspace {λb : λ∈R}.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Theorem 1 .</head><label>1</label><figDesc>Let a, b be vectors in V. Then a NOT b is represented by the vector a − ((a · b)/(b · b)) b, which is orthogonal to b [15].</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>A</head><label></label><figDesc>These two operations give the partially ordered set L(V) the structure of a lattice. If we work in a space with a scalar product, we can in addition define for each subspace its orthogonal complement; we thus have three operations on the collection L(V), defined as in [2]. It is simple to prove that these three operations on L(V) suffice to realize any necessary relations.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Definition 3 .</head><label>3</label><figDesc>3. (the remaining requirement, symmetry aside) when o_i = o_j, then σ(o_i, o_j) = max σ(o_k, o_l) for ∀ o_k, o_l ∈ D. Definition 3. Let the terms b_1 ... b_n ∈ V. The term b_1 OR ... OR b_n is represented by the subspace spanned by them [15]. Searching the similarity relation between an individual term a and a general subspace B is more complicated than searching the similarity relation between individual terms.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>a</head><label></label><figDesc>a AND (NOT b_1) AND (NOT b_2) ... AND (NOT b_n) passes into the form a NOT (b_1 OR b_2 ... OR b_n). By using Definition 3 we form the disjunction b_1 OR b_2 ... OR b_n as the vector subspace B = {λ_1 b_1 + ...</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Example of the influence of the factor k on the relevance of documents.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 .</head><label>2</label><figDesc>Example of the influence of the operation NOT and the influence of the factor k on the relevance of documents.</figDesc><table /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Podobnost a její modelování</title>
		<author>
			<persName><forename type="first">R</forename><surname>Bělohlávek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Znalosti</title>
		<imprint>
			<biblScope unit="page" from="309" to="316" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The logic of quantum mechanics</title>
		<author>
			<persName><forename type="first">G</forename><surname>Birkhoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Von Neumann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="823" to="843" />
			<date type="published" when="1936">1936</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Conceptual Spaces: The Geometry of Thought</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gärdenfors</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>MIT Press</publisher>
			<biblScope unit="page">307</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Morfologická analýza slovenčiny -Analýza flexných tvarov</title>
		<author>
			<persName><forename type="first">S</forename><surname>Horal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kalinová</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kostolanský</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference Proceedings Informatika</title>
				<meeting><address><addrLine>Bratislava</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005. 2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Linear algebra</title>
		<author>
			<persName><forename type="first">K</forename><surname>Jänich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">Undergraduate Texts in Mathematics</title>
		<imprint>
			<date type="published" when="1994">1994</date>
			<publisher>Springer-Verlag</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A solution to plato&apos;s problem: The latent semantic analysis theory of acquisition</title>
		<author>
			<persName><forename type="first">T</forename><surname>Landauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dumais</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychological Review</title>
		<imprint>
			<biblScope unit="volume">104</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="211" to="240" />
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Using BFA with WordNet Ontology Based Model for Web Retrieval</title>
		<author>
			<persName><forename type="first">P</forename><surname>Moravec</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pokorný</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</author>
		<idno>12</idno>
	</analytic>
	<monogr>
		<title level="m">SITIS 2005</title>
				<editor>
			<persName><forename type="first">Richard</forename><surname>Chbeir</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Dipanda</forename><surname>Albert</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Yétongnon</forename></persName>
		</editor>
		<editor>
			<persName><forename type="first">Kokou</forename></persName>
		</editor>
		<meeting><address><addrLine>Dijon</addrLine></address></meeting>
		<imprint>
			<publisher>University of Bourgogne</publisher>
			<date type="published" when="2005">2005. 2005</date>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="254" to="259" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">WordNet Ontology Based Model for Web Retrieval</title>
		<author>
			<persName><forename type="first">P</forename><surname>Moravec</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pokorný</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE WIRI 2005</title>
				<meeting><address><addrLine>Japan Tokyo</addrLine></address></meeting>
		<imprint>
			<biblScope unit="page" from="220" to="225" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Dokumentografické informační systémy</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pokorný</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Snášel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kopecký</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Karolinum, Skriptum MFF UK Praha</title>
				<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page">184</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Introduction to modern information retrieval</title>
		<author>
			<persName><forename type="first">G</forename><surname>Salton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>McGill</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1983">1983</date>
			<publisher>McGraw-Hill</publisher>
			<pubPlace>New York, NY</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Automatic word sense discrimination</title>
		<author>
			<persName><forename type="first">H</forename><surname>Schütze</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="97" to="124" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Features of similarity</title>
		<author>
			<persName><forename type="first">A</forename><surname>Tversky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psych. Rev</title>
		<imprint>
			<biblScope unit="volume">84</biblScope>
			<biblScope unit="page" from="327" to="352" />
			<date type="published" when="1977">1977</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">The Geometry of Information Retrieval</title>
		<author>
			<persName><forename type="first">K</forename><surname>Van Rijsbergen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Cambridge University Press</publisher>
			<biblScope unit="page">150</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Geometry and Meaning</title>
		<author>
			<persName><forename type="first">D</forename><surname>Widdows</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="s">CSLI Lecture Notes</title>
		<imprint>
			<biblScope unit="volume">172</biblScope>
			<biblScope unit="page">320</biblScope>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Word Vectors and Quantum Logic: Experiments with negation and disjunction</title>
		<author>
			<persName><forename type="first">D</forename><surname>Widdows</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Peters</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Mathematics of Language</title>
				<meeting>Mathematics of Language<address><addrLine>Indiana</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003-06">June 2003. 2003</date>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="141" to="154" />
		</imprint>
	</monogr>
	<note>Appeared in Mathematics of Language</note>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Zee</surname></persName>
		</author>
		<title level="m">Quantum Field Theory in a Nutshell</title>
				<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
