<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Anticipation in Simultaneous Interpreting</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Eva</forename><surname>Kiktová</surname></persName>
							<email>eva.kiktova@upjs.sk</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Arts</orgName>
								<orgName type="laboratory">Language Information and Communication Laboratory</orgName>
								<orgName type="institution">Pavol Jozef Šafárik University in Košice</orgName>
								<address>
									<settlement>Košice</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Július</forename><surname>Zimmermann</surname></persName>
							<email>julius.zimmermann@upjs.sk</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Arts</orgName>
								<orgName type="laboratory">Language Information and Communication Laboratory</orgName>
								<orgName type="institution">Pavol Jozef Šafárik University in Košice</orgName>
								<address>
									<settlement>Košice</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mária</forename><surname>Pal'ová</surname></persName>
							<email>maria.palova@upjs.sk</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Arts</orgName>
								<orgName type="laboratory">Language Information and Communication Laboratory</orgName>
								<orgName type="institution">Pavol Jozef Šafárik University in Košice</orgName>
								<address>
									<settlement>Košice</settlement>
									<country key="SK">Slovakia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Anticipation in Simultaneous Interpreting</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">92FD18916B9697654F9B172C7E0EFC6B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T23:35+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Anticipation</term>
					<term>nucleus</term>
					<term>percipient</term>
					<term>interpreting</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper describes a very fine suprasegmental linguistic feature: anticipation. An 'anticipation nucleus' is the uniquely modulated part of a speech utterance that signals continuation; it also allows for reduced semantic content in the utterance. The extent to which anticipation can be expressed depends on the phonetic, syntactic, and semantic capacity of a language. In this work, special attention was paid to the prosodic structure of the speech signal with respect to this issue. The paper reports on the perceptual detection of anticipation nuclei, describes the measurement of key prosodic parameters, and presents the results of the statistical analysis.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Understanding what another person is telling us is crucial to human civilization as we know it. Usually, we can grasp meaning without great effort; however, some situations require us to focus and check that our understanding is as intended. In conversation, we weigh the importance that we attribute to information, especially when it appears inconsistent or unbelievable. In some situations, we can take over for a moment from the person speaking to us, or finish a sentence that another person has started. That such situations are possible indicates that we have the ability to anticipate what another is about to say. Furthermore, if the speech utterance is consistent at all speech levels, anticipation can be performed without a significant burden <ref type="bibr" target="#b4">[5]</ref>.</p><p>In a foreign language, the ability to anticipate relies on an awareness of multiple levels of language, including non-verbal expressions. An interpreter will use all the information available to construct the resulting statement in their mind <ref type="bibr" target="#b2">[3]</ref>. The ability to anticipate in a foreign language is, to some extent, a matter of learning interpretation strategies; these include intonational, semantic, and syntactic patterns, which indicate how a statement will continue <ref type="bibr" target="#b0">[1]</ref>, <ref type="bibr" target="#b7">[8]</ref>, <ref type="bibr" target="#b10">[11]</ref>.</p><note>Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</note><p>Anticipation in simultaneous interpreting means that an interpreter is able to complete the statement at the same
time as the speaker, or knows in advance the content of the future statement <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b5">[6]</ref>, <ref type="bibr" target="#b9">[10]</ref>. The ability to anticipate is a key competency for professional interpreters.</p><p>This paper is structured as follows. Section 2 presents the prosodic definition of the anticipation nucleus. Section 3 describes the research design, the participants (interpreters), and the sound data used, and presents the statistical analysis and its results. Section 4 focuses on the detected anticipation moments; it describes the parameters extracted from the anticipation nuclei and discusses the results. The conclusion and discussion follow in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Prosodic definition of anticipation nucleus</head><p>To explain the essence of the examined phenomenon, we highlight the following: if the fundamental tone F 0 rises in the last one or two syllables (their sonantic nuclei) of a rhythmic group, and is then followed by a pause filled with silence, a hesitation sound, or an intake of breath, it is likely that the following rhythmic group contains important information.</p><p>The last one or two syllables of a rhythmic group modulated in this way and followed by a pause constitute an anticipation nucleus.</p><p>The two realizations of the anticipation nucleus mentioned above are depicted in Fig. <ref type="figure">1</ref>. Realization of the anticipation nucleus within a single syllable can occur in a diphthong or hiatus.</p></div>
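The detection rule above (an F 0 rise on the final sonantic nuclei of a rhythmic group, followed by a bounded pause) can be sketched in Python. The thresholds below are illustrative assumptions, not values prescribed by the paper; only the "roughly one second interrupts anticipation" bound is taken from the text.

```python
# Hypothetical sketch of the anticipation-nucleus rule. The numeric
# thresholds (min_f0_rise_hz, min_pause_s, max_pause_s) are assumptions
# chosen for illustration, not values reported in the paper.

def is_anticipation_nucleus(f0_nuclei_hz, pause_duration_s,
                            min_f0_rise_hz=30.0,
                            min_pause_s=0.1, max_pause_s=1.0):
    """f0_nuclei_hz: F0 [Hz] measured on the sonantic nuclei of the last
    one or two syllables of a rhythmic group, in temporal order.
    pause_duration_s: duration of the following pause (silence,
    hesitation sound, or intake of breath)."""
    if len(f0_nuclei_hz) < 2:
        return False
    rise = f0_nuclei_hz[-1] - f0_nuclei_hz[0]
    # Pauses much longer than ~1 s interrupt the anticipation process.
    pause_ok = min_pause_s <= pause_duration_s <= max_pause_s
    return rise >= min_f0_rise_hz and pause_ok
```

With the measured values from Tab. 2, the sentence 6-29m (F 0 rise 131 to 177 Hz, pause 0.419 s) satisfies the rule, while the flat example 31-13m (falling F 0, 1.659 s pause) does not.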
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Perceptual identification</head><p>The aim of the perceptual identification was to obtain information about the rhythmic groups, the time of anticipation, and the type of anticipation event.</p><p>Four interpreters participated in the present research; all had a university degree in French, and two also had a degree in linguistics. Three had been conference interpreters for more than 10 years, and one for more than five years; one was also an interpreter for the European Union.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">The sound database</head><p>The analysed data include speeches of European Parliament members delivered in French (74 speakers, male and female). From a total of 7366 sentences consisting of rhythmic groups, a representative sample of 200 (100 + 100) sentences was randomly selected. Sound data were extracted from the MP4 recordings as mono, 16-bit audio at a 44100 Hz sampling frequency.</p><p>Figure <ref type="figure">1</ref>: Example of two different realizations of anticipation nuclei. First, a two-syllable occurrence; second, a one-syllable occurrence.</p></div>
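The extraction step described above can be reproduced with the ffmpeg command-line tool. This is a sketch under assumptions: the paper does not name the tool it used, and the file names are placeholders.

```python
# Sketch: convert an MP4 recording to mono, 16-bit PCM WAV at 44 100 Hz
# using the ffmpeg CLI. File names are illustrative placeholders.
import subprocess

def extraction_command(mp4_path, wav_path):
    return [
        "ffmpeg", "-i", mp4_path,
        "-vn",                # drop the video stream
        "-ac", "1",           # mono
        "-ar", "44100",       # 44 100 Hz sampling frequency
        "-c:a", "pcm_s16le",  # 16-bit linear PCM
        wav_path,
    ]

# Example invocation (requires ffmpeg to be installed):
# subprocess.run(extraction_command("speech.mp4", "speech.wav"), check=True)
```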
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Description of perceptual identification</head><p>Each sentence was thoroughly assessed by the four interpreters, who recorded the number of rhythmic groups, the order number of the rhythmic group with the anticipation nucleus, the time of the anticipation nucleus, and the type of anticipation event. The rhythmic group refers to the melodically characteristic part of a compound sentence <ref type="bibr" target="#b8">[9]</ref>. It divides the sentence into shorter parts according to specific tone and time modulation, see Fig. <ref type="figure">2</ref>. An example of the collected data is shown in Tab. 1.</p><p>The aim of recording the interpreters' perceptions was to define the anticipation nucleus on the oscillogram (Fig. <ref type="figure" target="#fig_0">2</ref>, upper part); this refers to defining the saturation point at which the participant has detected an anticipatory hint and can anticipate one or several possible trajectories that the following speech could take.</p><p>The interpreters did not receive any advance notice of the anticipation events. They described in their own words what information they anticipated and their expectations for the continuation of the sentence (anticipatory trajectory), see Tab. 1. Finally, in studies <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b6">[7]</ref>, 12 different anticipation events were defined, such as core of utterance, change of theme, determinative syntagm (3), emphasis (9), etc.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Statistical evaluation of perceptual identification</head><p>The data gathered from the interpreters' perceptions were statistically evaluated. The findings indicate how compound sentences and anticipation moments are perceived.</p><p>Results revealed that in 70% of cases (A), the linguist-interpreters were in agreement on the number of rhythmic groups that comprised a compound sentence (Fig. <ref type="figure" target="#fig_1">3</ref>). In the remaining 30% of cases, they differed on one (B), two (C), or more (D) of the rhythmic groups. The majority of disagreements occurred in compound sentences with more than five rhythmic groups; it should be noted that these sentences are particularly interesting for interpreters, as their high number of anticipatory moments creates many opportunities for new speech utterances <ref type="bibr" target="#b3">[4]</ref>.</p><p>Next, the statistical evaluation focused on the extent to which the interpreters could identify syllables with anticipation nuclei (Fig. <ref type="figure" target="#fig_2">4</ref>). The degree of discord is higher than in the previous evaluation. Only 9% of the time were all four participants in agreement on when the anticipation nucleus occurred; in the other cases, one participant (B), two participants (C), or more (D) disagreed on the point of the anticipation nucleus.</p><p>Given that not every anticipation nucleus is clearly confirmed by agreement in the perception tests, the results indicate that anticipation perception is largely individual. The presence of an anticipation nucleus is therefore not a binary (logical) value; the phenomenon is better described as a grade of anticipation. An anticipation moment detected only once has a low grade of anticipation, or in other words, a low probability of anticipation. 
Conversely, an anticipation moment that was confirmed several times through the perceptual identifications has a higher probability of anticipation. In both evaluations, the stochastic character of the anticipation phenomenon was confirmed.</p><p>In the next phase, the standard deviations [13] of the time realization of the anticipation nucleus were calculated for the first hundred and for the second hundred sentences. The results are depicted in Fig. <ref type="figure" target="#fig_3">5</ref>. A key point to note is the reduced spread of the standard deviation in the second hundred sentences. As this round of perception tests was conducted approximately two weeks after the first round, the reduced spread reflects the more precise guidelines given to the participants on how to determine an anticipation nucleus. In the first hundred, one participant strictly adhered to increased intonation and a subsequent pause as the exclusive criterion for identifying the rhythmic group with anticipation, whereas the remaining participants consciously combined this criterion with anticipatory moments provided by other linguistic factors; this was especially the case in 'flat' sentences that lacked intonation. Therefore, only the second hundred sentences were included in the analysis described below.</p></div>
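The agreement measure used here, the standard deviation of the nucleus times reported by the four percipients for one sentence, can be computed directly. The times below are the Tab. 1 values for file 6-29m; treating the four later timestamps as reports of the same nucleus is an assumption made for this sketch.

```python
# Spread of anticipation-nucleus times across percipients for one
# sentence, measured as the sample standard deviation.
from statistics import stdev

def time_spread(times_s):
    """Sample standard deviation of nucleus times [s] across percipients."""
    return stdev(times_s)

# Tab. 1, file 6-29m: the four percipients' times for the (assumed) same
# nucleus: 0:02.146, 0:02.487, 0:02.510, 0:02.524.
spread_6_29m = time_spread([2.146, 2.487, 2.510, 2.524])
```

A smaller spread indicates closer agreement on when the nucleus occurs; for 6-29m it is roughly 0.18 s.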
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Measurement of identified anticipation nuclei</head><p>According to the data obtained from the four linguist-interpreters, measurements of time, fundamental tone, and intensity were performed in Speech Analyzer 3.0.1. First, measurements of the time T 1, fundamental tone F 0 1, and intensity I1 of the penultimate syllable were performed. The same three parameters were extracted for the ultimate syllable (T 2, F 0 2, I2). The measurement was performed on a single syllable in cases where the anticipation nucleus was identified within the same syllable (diphthong, hiatus). The pause duration (T 3) and the start time of the next utterance (T 4) were also recorded. Tab. 2 shows an example of parameters extracted from several sentences; the F 0 difference (F 0 2 - F 0 1) is given in the last column.</p><p>The first example (6-29m) in Tab. 2 represents a common linguistic setting for identifying an anticipation nucleus (similar to Fig. <ref type="figure" target="#fig_0">2</ref>); the F 0 increase from the sonantic nucleus of the penultimate to the ultimate syllable is in the normal range. The second example (138-28f) represents hyperprosody <ref type="bibr" target="#b4">[5]</ref>, in which the base F 0 values and the F 0 differences are too high; hyperprosody undermines the anticipation process through frequent emphasis on parts of an utterance that carry no new information. Conversely, when speech is too flat (31-13m), referred to as hypoprosody <ref type="bibr" target="#b4">[5]</ref>, grasping key information and anticipating future content is also difficult.</p></div>
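The three Tab. 2 examples can be separated by their F 0 difference (F 0 2 - F 0 1) alone. The classification thresholds below are assumptions for this sketch; the paper only reports that the normal-range mean of the difference lies roughly between 32 and 40 Hz.

```python
# Illustrative classification of the Tab. 2 examples by F0 difference.
# The hyper-/hypoprosody thresholds are hypothetical, chosen only so that
# the three reported examples fall into the categories the text assigns.

def classify_prosody(f0_diff_hz, hyper_threshold=100.0, hypo_threshold=0.0):
    if f0_diff_hz >= hyper_threshold:
        return "hyperprosody"  # exaggerated, frequent emphasis
    if f0_diff_hz <= hypo_threshold:
        return "hypoprosody"   # flat speech, little F0 movement
    return "normal"

# F0 differences [Hz] from Tab. 2.
rows = {"6-29m": 46, "138-28f": 139, "31-13m": -15}
labels = {name: classify_prosody(diff) for name, diff in rows.items()}
```

With these assumed thresholds, 6-29m is classified as normal, 138-28f as hyperprosody, and 31-13m as hypoprosody, matching the discussion above.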
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Statistical evaluation of anticipated nuclei</head><p>In the next phase of investigating the anticipation phenomenon, a statistical evaluation of the identified anticipation nuclei was performed. Of a total of 140 detected anticipation nuclei, six were excluded owing to extreme values. The Shapiro-Wilk test <ref type="bibr" target="#b11">[12]</ref>, <ref type="bibr">[13]</ref> was used to investigate the data's normality.</p><p>First, the difference in the fundamental tone F 0 [Hz] of the last (or last two) syllables, F 0 2-F 0 1, was analyzed. The results show that, at a significance level of 5%, the mean value of the fundamental tone difference F 0 2-F 0 1 lies between 31.84 Hz and 40.11 Hz (Tab. 3).</p><p>Next, the differences in the effective value of intensity (measured in dB) between the last two syllables, I2-I1, were calculated. The results show that, at the 5% significance level, the mean value of the intensity difference I2-I1 lies between 0.83 dB and 2.54 dB (Tab. 3).</p><p>Pauses were measured in seconds. The results presented in Tab. 3 show that, at the 5% significance level, the average pause length of the sample lies between 360 and 420 milliseconds. The pause can be filled with silence, hesitation sounds, or an intake of breath. Pauses of longer duration (approximately more than one second) indicate an interruption of the anticipation process.</p><p>Of the above-mentioned suprasegmental parameters, the fundamental tone is the most relevant to anticipation nucleus detection, followed by pause duration and, finally, intensity change.</p></div>
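The interval bounds quoted above are 95% confidence intervals for the mean, mean ± t · s / √n. A minimal sketch, using the Tab. 3 summary statistics for the F 0 differences (mean 35.98 Hz, standard deviation 24.20 Hz, n = 134); the t critical value for 133 degrees of freedom is hard-coded here to stay dependency-free (scipy.stats.t.ppf would compute it exactly).

```python
# 95% confidence interval for a mean: mean +/- t * s / sqrt(n).
# t_crit ~ 1.978 is the two-sided 5% critical value for 133 degrees of
# freedom (hard-coded approximation; use scipy.stats.t.ppf for exactness).
from math import sqrt

def mean_ci_95(mean, std_dev, n, t_crit=1.978):
    half_width = t_crit * std_dev / sqrt(n)
    return mean - half_width, mean + half_width

# F0-difference statistics reported in Tab. 3: mean 35.98 Hz, sd 24.20 Hz,
# n = 134 anticipation nuclei.
ci_low, ci_high = mean_ci_95(35.98, 24.20, 134)
```

The resulting bounds reproduce the tabulated interval of about 31.84 Hz to 40.11 Hz, and 24.20 / √134 ≈ 2.09 reproduces the tabulated standard error.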
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion and discussion</head><p>This paper contributes useful information on the topic of anticipation, particularly as it is used in simultaneous interpreting. It describes a process of perceptual identification of anticipation nuclei and the evaluation of the perception results, and then reports the extraction of the relevant prosodic parameters (fundamental tone, intensity, time). It additionally contributes statistical parameters for evaluating the measured anticipation nuclei.</p><p>Although interpreting requires multiple skills and is largely individual, the results of the current study indicate that interpreters are able to identify the same anticipatory moment, at which they can estimate the same content of future statements (anticipation type).</p><p>This research demonstrates a model for collecting data for future studies on simultaneous interpreting. For such phonetic and linguistic research, data acquisition depends on the results of perception tests in which a group of people record their perceptions. This study therefore contributes to a corpus of anticipation nuclei that is currently being prepared in French, covering 1000 sentences, which will also be perceptually evaluated by the same four linguist-interpreters. Their perceptions represent the probability of anticipation in the analyzed sound data, and their contribution will be the foundation for further measurements of anticipation nuclei.</p><p>This analysis of key suprasegmental parameters indicates how anticipation nuclei are detected. In the future, the parameters extracted here could be used to create a software tool able to detect the presence of anticipation nuclei.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Example generated using Speech Analyzer 3.0.1. 
It displays the raw waveform, spectrogram, relative intensity [dB], and F 0 [Hz]. It also shows part of a sentence with an anticipation nucleus and background, and the decomposition of a sentence into rhythmic groups.</figDesc><graphic coords="3,50.79,80.17,485.39,160.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Results of perceptions: number of rhythmic groups.</figDesc><graphic coords="3,107.15,479.84,130.11,99.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Results of perceptions: time of the anticipation nucleus.</figDesc><graphic coords="3,107.15,626.76,130.11,98.42" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Results of perceptions: variance of time when anticipation nucleus is presented.</figDesc><graphic coords="3,307.56,478.17,231.30,172.68" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Comparison of perceptions for the same file.</figDesc><table><row><cell>Num. of percipient</cell><cell>File</cell><cell>Num. of rhythmic group</cell><cell>Rhythmic group with anticipation</cell><cell>Anticipation nucleus time</cell><cell>Supposed type of anticipation</cell></row><row><cell>1.</cell><cell>6-29m</cell><cell>3</cell><cell>1,2</cell><cell>0:00.700; 0:02.146</cell><cell>9,3</cell></row><row><cell>2.</cell><cell>6-29m</cell><cell>3</cell><cell>2</cell><cell>0:02.487</cell><cell>3</cell></row><row><cell>3.</cell><cell>6-29m</cell><cell>2</cell><cell>2</cell><cell>0:02.510</cell><cell>3</cell></row><row><cell>4.</cell><cell>6-29m</cell><cell>2</cell><cell>1</cell><cell>0:02.524</cell><cell>3</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Examples of measured values.</figDesc><table><row><cell>File</cell><cell cols="8">T1 [s] F 0 1 [Hz] I1 [dB] T2[s] F 0 2 [Hz] I2 [dB] T3 [s] T4 [s] F 0 2 -F 0 1 [Hz]</cell></row><row><cell>6-29m</cell><cell>1.895</cell><cell>131</cell><cell>-2.6</cell><cell>2.165</cell><cell>177</cell><cell>-7.1</cell><cell>0.419 2.731</cell><cell>46</cell></row><row><cell cols="2">138-28f 3.220</cell><cell>222</cell><cell>-5.2</cell><cell>3.397</cell><cell>361</cell><cell>-2.4</cell><cell>0.335 3.844</cell><cell>139</cell></row><row><cell cols="2">31-13m 1.572</cell><cell>152</cell><cell>-16.9</cell><cell>1.797</cell><cell>137</cell><cell>-11.7</cell><cell>1.659 3.556</cell><cell>-15</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3 :</head><label>3</label><figDesc>Location and scatter statistics of 134 anticipation nuclei.</figDesc><table><row><cell>Mean</cell><cell cols="2">Confidence interval</cell><cell>Min</cell><cell>Max</cell><cell>Standard</cell><cell>Standard</cell></row><row><cell>x</cell><cell>-95%</cell><cell>+95%</cell><cell></cell><cell></cell><cell>deviation</cell><cell>error</cell></row><row><cell></cell><cell></cell><cell></cell><cell cols="2">Fundamental tone differences</cell><cell></cell><cell></cell></row><row><cell>35.98</cell><cell>31.84</cell><cell>40.11</cell><cell>-17</cell><cell>91</cell><cell>24.20</cell><cell>2.09</cell></row><row><cell></cell><cell></cell><cell cols="4">Differences of the effective value of intensity</cell><cell></cell></row><row><cell>1.69</cell><cell>0.83</cell><cell>2.54</cell><cell>-10.90</cell><cell>14.60</cell><cell>5.00</cell><cell>0.43</cell></row><row><cell></cell><cell></cell><cell></cell><cell cols="2">Pause duration value</cell><cell></cell><cell></cell></row><row><cell>0.39</cell><cell>0.36</cell><cell>0.42</cell><cell>0.03</cell><cell>0.89</cell><cell>0.18</cell><cell>0.01</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgement</head><p>This work was supported by the Slovak Research and Development Agency under the contracts No. APVV-15-0492 and No. APVV-15-0307.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Intonation and anticipation in simultaneous interpreting</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">G</forename><surname>Seeber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Ecole de Traduction et d&apos;Interprétation</title>
				<imprint/>
		<respStmt>
			<orgName>Université de Genève</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Cognitive load in simultaneous interpreting: Model meets data</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">G</forename><surname>Seeber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Bilingualism</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="228" to="242" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The role of anticipation in discourse: Text processing in simultaneous interpreting</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adamowicz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Polish Psychological Bulletin</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="153" to="160" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Strategies of simultaneous interpreting and directionality</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bartłomiejczyk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Interpreting</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="149" to="179" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Prosodic anticipatory clues and reference activation in simultaneous interpretation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Palova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kiktova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">XLinguae</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="13" to="22" />
			<date type="published" when="2019-02">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="https://www.languageconnections.com/2015/05/anticipation-in-simultaneous-interpreting/" />
		<title level="m">Anticipation in simultaneous interpreting</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">On prosodic anticipatory hint</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pal'ová</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zeleňáková</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">XLinguae</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="165" to="180" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Anticipatory Behavior in Adaptive Learning Systems</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Butz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sigaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pezzulo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Baldassarre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">From Brains to Individual and Social Behavior</title>
				<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">378</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Ce + N + Relative : Sémantique et Prosodie</title>
		<author>
			<persName><forename type="first">G</forename><surname>Kleiber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Lingvisticae Investigationes</title>
				<imprint>
			<publisher>John Benjamins Publishing Company</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="251" to="273" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Analysis of turn-taking in the Slovak interview corpus</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ondas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Juhar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Emerging eLearning Technologies and Applications</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="411" to="416" />
		</imprint>
	</monogr>
	<note>Proc. ICETA 2018</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">On the semantics and pragmatics of linguistic feedback</title>
		<author>
			<persName><forename type="first">J</forename><surname>Allwood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Nivre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ahlsén</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Semantics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Detection of anticipation nucleus using HMM and fuzzy based approaches</title>
		<author>
			<persName><forename type="first">E</forename><surname>Kiktova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zimmermann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">World Symposium on Digital Intelligence for Systems and Machines</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="355" to="360" />
		</imprint>
	</monogr>
	<note>Proc. DISA 2018</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
