<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">What distributional semantics can (and cannot) tell us about meaning</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Lenci</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Università di Pisa and ILC-CNR</orgName>
								<address>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">What distributional semantics can (and cannot) tell us about meaning</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">96FAEB328AACDD9BF7839A6F4959C794</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Distributional semantics is a mainstream research paradigm in computational linguistics and cognitive science. It is based on a simple assumption: semantic representations of lexical items can be built by recording their distribution in linguistic contexts. However, whether statistical co-occurrences alone are enough to address deep semantic questions, or whether they merely provide a shallow proxy of lexical meaning, remains an open question. In other words, what is the real descriptive and explanatory adequacy of distributional representations of meaning? In this talk, I explore this issue by presenting some research themes that shed light on the potential and the current limits of distributional models of meaning. The first theme is the notion of semantic similarity. Distributional semantics rests on the so-called Distributional Hypothesis, which states that lexemes occurring in similar linguistic contexts have similar meanings. However, distributional semantic models are in fact biased towards the much vaguer notion of semantic relatedness: their output looks like a network of word associations rather than a semantically structured space. This is an important weakness of current distributional semantic models. Though they have proven useful for capturing various aspects of the mental lexicon, their limits in properly distinguishing different semantic relations also greatly impair the usability of distributional semantics for modelling logical inferences. A central aspect of human semantic competence is the ability to compose lexical meanings to form the interpretation of a potentially unlimited number of complex linguistic expressions, but compositionality is surely the "bottleneck" for distributional semantics. How distributional representations can be projected from the lexical level to the sentence or even discourse level is still an open issue.
In this talk, I present a recent proposal for a distributional model of sentence comprehension in which sentence meaning is built by dynamically activating and unifying distributional information about events and their participants.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body/>
		<back>
			<div type="references">

				<listBibl/>
			</div>
		</back>
	</text>
</TEI>
