<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Method for Dataset Creation for Dialogue State Classification in Voice Control Systems for the Internet of Things</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ivan</forename><surname>Shilin</surname></persName>
							<email>shilinivan@corp.ifmo.ru</email>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University Saint-Petersburg</orgName>
								<address>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Liubov</forename><surname>Kovriguina</surname></persName>
							<email>lyukovriguina@corp.ifmo.ru</email>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University Saint-Petersburg</orgName>
								<address>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dmitry</forename><surname>Mouromtsev</surname></persName>
							<email>mouromtsev@mail.ifmo.ru</email>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University Saint-Petersburg</orgName>
								<address>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Gerhard</forename><surname>Wohlgenannt</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University Saint-Petersburg</orgName>
								<address>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roman</forename><surname>Ivanitskiy</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">ITMO University Saint-Petersburg</orgName>
								<address>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Method for Dataset Creation for Dialogue State Classification in Voice Control Systems for the Internet of Things</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">8698489335C27F2BA3B8AD988F468320</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T22:46+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>voice control systems for Internet of Things</term>
					<term>slot type classification</term>
					<term>command type classification</term>
					<term>dataset for voice-controlled devices</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent years, speech-based interaction has become an important method of communication with devices in the Internet of Things (IoT). Voice control interfaces involve all the challenges and difficulties of natural language understanding and human-computer communication. In this paper, we present a methodology for creating initial training data for voice-controlled devices, which helps to design and track dialogue system states. Using crowdsourcing, in a first step we collect simple commands that users might give to devices. These commands are analyzed and manually classified into 50 user-system interaction scenarios. In a second step, we design a set of potential system states after processing the initial user commands, and crowd workers are asked to provide multi-turn dialogues between a user and the device, which simulate the process of resolving a system state towards completion. The resulting dataset contains 320 commands and their classification into interaction scenarios for the first step, and 640 multi-turn dialogues for step two, generated given 12 potential system states. Finally, we present a baseline for the automatic classification of utterance type and slot types in user commands, which is important for dialogue state detection. The proposed methodology allows collecting dialogues for IoT devices which cover a variety of system states and interaction patterns.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction and Related Work</head><p>Gartner, Inc. forecasts that 8.4 billion connected things will be in use worldwide in 2017, up 31 percent from 2016, and that this number will reach 20.4 billion by 2020. Total spending on endpoints and services will reach almost $2 trillion in 2017<ref type="foot" target="#foot_0">1</ref>. The pool of devices and of Internet of Things (IoT) architectures and protocols grows rapidly; however, this domain lacks communication instruments, regarding both interaction between devices and human-machine interaction and cooperation. Communication interfaces for IoT devices are typically producer-specific; thus, the development of conversational agents and voice control systems for IoT is in high demand due to the universal nature of natural language and speech communication <ref type="bibr" target="#b11">[Portet et al., 2013</ref><ref type="bibr">, Aldrich, 2003]</ref>.</p><p>There are two different paradigms for the design of dialogue systems and voice interfaces. On the one hand, in the modular approach, the system is assembled from various knowledge extraction algorithms and typically integrates a number of modules for natural language understanding, a dialogue manager, knowledge bases, rule bases, ontologies and other models <ref type="bibr" target="#b7">[Jurafsky and Martin, 2017]</ref>.</p><p>On the other hand, end-to-end systems are trained directly from conversational data. This approach requires no hand-crafted feature engineering or annotation, but it needs large dialogue datasets for training <ref type="bibr" target="#b9">[Lowe et al., 2017</ref><ref type="bibr" target="#b12">, Serban et al., 2015]</ref>. This requirement restricts end-to-end systems to certain domains and natural languages.</p><p>In the case of voice-controlled systems for IoT devices, the problems are aggravated. 
Here, often both specific training data and rule or knowledge bases are missing, and furthermore there is a large diversity of IoT architectures and protocols. Moreover, linguistic resources describing devices, technologies, and communication between users and IoT systems are limited or do not exist. For example, the Google Speech Commands dataset<ref type="foot" target="#foot_1">2</ref> includes 65,000 short, one-second commands (like "left!", "stop!"). Another source, the Mozilla Common Voice dataset<ref type="foot" target="#foot_2">3</ref>, comprising 500 hours of speech from 20,000 different volunteers, was developed for keyword spotting tasks and web application control. These and similar datasets are intended for training speech recognition systems.</p><p>In this paper, we propose a first step to address the problem of missing datasets for training end-to-end systems. We introduce a methodology for dataset creation using a combination of crowdsourcing and domain experts, and we implement and evaluate the method by creating initial small-scale datasets of IoT dialogues for the Russian language. This dataset is primarily intended for the development of voice-controlled systems for IoT; such systems fall within the emergent paradigm of artificial cognitive systems. In this paper, we focus not only on dataset creation, but also on automatic morphosyntactic annotation, the classification of user commands, and slot type identification.</p><p>The methodology can be summarized as follows: The first step involves crowdsourcing to create first-turn data, which contain initial commands or requests of a user to an IoT device. Then, we analyzed the first-turn commands and derived 50 scenarios of user-system interaction. 
Independently, we designed an initial set of system states (12 states), to which the system can transition after understanding the command and collecting context knowledge.</p><p>In a second iteration of crowdsourcing, the users are asked to generate natural language responses to the respective system state, and to simulate a dialogue which resolves the situation to a successful final state. Moreover, we present and evaluate a baseline approach to classify the first-turn user commands into command types and a set of 5 binary features ("slots"), which can be used to track the dialogue state in future work.</p><p>Applying this methodology, we collected 320 first-turn command items, and 640 multi-turn dialogues from the second crowdsourcing step. The proposed methodology allows for collecting dialogues of the necessary variety and representativeness for the described task, and it can be used to extend the dataset created in our experiments. To the best of our knowledge, there is no previous work which provides a comparable methodology or datasets. As a final remark about the dataset, it also contains automatically created annotations according to the Universal Dependencies 2.0 standard (for morphology and syntax). The morphological and syntactic annotations were performed with the Russian language models for the Stanford CoreNLP library <ref type="bibr" target="#b8">[Kovriguina et al., 2017]</ref>. The current demonstration files of the dataset with the corresponding annotations are published in the project repository<ref type="foot" target="#foot_3">4</ref>.</p><p>The paper is structured as follows: Section 2 explains the core aspects of this work, namely the methodology used to create the dataset and to design the interaction scenarios and dialogue manager states. In Section 3 we present the experiments on automatically classifying first-turn user commands, and we conclude with Section 4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Dataset Creation for Dialogue State Tracking</head><p>In this section, we first discuss some fundamentals of dialogue managers and their influence on the data necessary for the development of conversational intelligence for IoT, and then present the two steps of our methodology to create datasets for dialogue state identification in IoT systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Fundamentals of Dialogue State Tracking for IoT Devices</head><p>Typical modern architectures of conversational agents include a dialogue manager as a core module. Dialogue managers can be implemented using several approaches, such as finite-state automata, frame-based methods, the information state update (ISU) approach, the belief-desire-intention (BDI) model, reinforcement learning, etc. <ref type="bibr" target="#b12">[Sungjin et al., 2016]</ref>.</p><p>Dialogue state trackers aim to identify the current conversation state based on a user's input and the previous conversation history, so that the dialogue manager can choose the best next action. Typically, a slot-filling algorithm in the natural language understanding module identifies relevant objects in the user's utterance and tries to find an appropriate slot, which is then processed by the dialogue manager. Analysis of human conversations with smart environments has shown that resolving coreference, especially abstract coreference, is a much harder task, because users are inclined to use abstract lexis (light, sound) to denote the device (a lamp or an audio system, respectively). 
Therefore, a natural language understanding module needs algorithms which can find empirical referents for abstract and new, previously unseen concepts <ref type="bibr" target="#b6">[Jia et al., 2017]</ref>.</p><p>Conceptualizing conversational agents for the IoT domain as an emergent type of artificial cognitive architecture <ref type="bibr" target="#b13">[Vernon, 2014</ref><ref type="bibr" target="#b12">, Profanter, 2012]</ref>, we believe that it is critical to incorporate context-sensitive knowledge obtained from devices, data storages, and device knowledge bases into the space of dialogue states.</p><p>As a first step towards dialogue state tracking models for the Internet of Things, it is necessary to isolate and analyze user request patterns for different devices. From this data, potential dialogue states can be designed. Furthermore, it is necessary to obtain users' responses to given dialogue states, as well as natural language equivalents for the system states and the user interactions. In the following two-step dataset creation process, we present the methodology and first dataset creation experiments which tackle the described problems.</p></div>
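The frame-based slot-filling loop described above can be sketched as follows; this is a minimal illustration with hypothetical slot names and actions, not the architecture of any particular system described in this paper.

```python
# Minimal frame-based dialogue state sketch (illustrative only; the slot
# names and actions are hypothetical, not part of the dataset described here).
from dataclasses import dataclass, field


@dataclass
class DialogueState:
    """Tracks which frame slots have been filled so far."""
    slots: dict = field(default_factory=lambda: {"device": None, "location": None})

    def update(self, parsed: dict) -> None:
        # Fill any slots identified by the NLU module in the latest utterance.
        for name, value in parsed.items():
            if name in self.slots and value is not None:
                self.slots[name] = value

    def next_action(self) -> str:
        # Ask for the first missing slot; execute once the frame is saturated.
        for name, value in self.slots.items():
            if value is None:
                return f"request:{name}"
        return "execute"


state = DialogueState()
state.update({"device": "lamp"})        # "Turn on the light!" -> device resolved
print(state.next_action())              # a location is still missing
state.update({"location": "kitchen"})   # "Only in the kitchen."
print(state.next_action())
```

The dialogue manager would map `request:location` to a clarification question and `execute` to a device command.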
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Step I: First-Turn Dataset Creation and Scenario Identification</head><p>In the first step of dataset creation, we collect initial commands given by the device user. We employ a crowdsourcing strategy and provide each user with a task description which, in summary, asks crowd workers to choose an arbitrary smart device from any smart environment and provide 2-4 operation commands for the chosen device. For example, such a command might be "Wash at 40 degrees" (given to a washing machine). In this step, 29 crowd workers were involved.</p><p>The crowd workers proposed a vast list of devices, ranging from smart lamps and smart electricity/water meters to smart bread makers, smart bathroom sensors and garden paths with heating. For those devices, the crowd workers were instructed to formulate several natural language requests (or commands). As there were no restrictions on device selection, some devices were chosen by several users.</p><p>After collecting the results from the crowd workers, we analyzed and merged the commands and ended up with 50 scenarios, each containing from 2 to 23 natural language commands, 320 commands in total. All counts are given after the removal of duplicate commands and scenarios from the dataset.</p><p>A scenario is a string quadruple (D, C, P, L), where D is a device, which is typically represented by a list of sensors. C is an array of potential commands connected to this device. P is a multidimensional array of parameters associated with each command, and L is a set of arbitrary natural language commands generated by informants (in our case: crowd workers). 
Therefore, a scenario contains starting phrases (first-turn utterances), which are typically used as commands for the device.</p><p>An example of a scenario<ref type="foot" target="#foot_4">5</ref> is as follows:</p><p>-Device D: a washing machine that has a sensor for load detection;</p><p>-Commands C: wash<ref type="foot" target="#foot_5">6</ref>;</p><p>-Parameters P: the washing temperature of type integer; this parameter also has a default value;</p><p>-Natural language commands L:</p><p>1. Start washing at 23:05! 2. Wash the clothes!</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>3. Wash at 90 degrees!</p><p>As a side note, the crowd workers produced paraphrases for some commands, which are preserved in the dataset and can be used in future work, for example for paraphrase detection.</p></div>
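The scenario quadruple (D, C, P, L) can be sketched as a small data structure; the class below is our own illustration, populated with the washing-machine example from above, and is not the dataset's published serialization format.

```python
# Sketch of the scenario quadruple (D, C, P, L) from Section 2.2.
# The field contents follow the washing-machine example; the class itself
# is an illustration, not part of the published dataset format.
from dataclasses import dataclass


@dataclass
class Scenario:
    device: str        # D: the device, with its sensors
    commands: list     # C: potential commands for the device
    parameters: dict   # P: parameters per command (with defaults)
    nl_commands: list  # L: natural language commands from crowd workers


washing = Scenario(
    device="washing machine with a load-detection sensor",
    commands=["wash"],
    parameters={"wash": {"temperature": {"type": "integer", "default": 40}}},
    nl_commands=[
        "Start washing at 23:05!",
        "Wash the clothes!",
        "Wash at 90 degrees!",
    ],
)
print(len(washing.nl_commands))
```

Note that the default temperature value (40) is an assumption for illustration; the dataset only records that a default exists.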
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Step II: Dialogue Manager States and Multi-turn Dialogue Generation</head><p>Given the first-turn commands and the collected scenarios from step I, we analyzed potential system response states based on those commands. Assuming a correct understanding of the command by the dialogue system, this led to a list of 12 possible system reactions to the first-turn requests, which correspond to dialogue manager states.</p><p>To make those dialogue manager states more intuitive, we present some examples of system reaction codes and natural language responses for a scenario with a washing machine:</p><p>1. System State / Response code 1. All necessary devices, commands and parameters have been found, there are no conflicts, and the command will be run now.</p><p>Corresponding natural language response: Washing starts!</p><p>2. System State / Response code 2. All necessary devices, commands and parameters have been found, but a multiple choice problem exists (several entities/devices satisfy the query).</p><p>Corresponding natural language response: Which washing machine should be used: the one in the kitchen or the one in the bathroom?</p><p>3. System State / Response code 3. A parameter or parameter value was set incorrectly.</p><p>Corresponding natural language response: The washing temperature cannot be set to 15 degrees. Please choose between 30 and 90 degrees!</p><p>As mentioned, the system reaction codes correspond to the states of the dialogue manager, given that the user request was correctly understood by the system. The proposed list of 12 dialogue manager states includes states for parameter conflicts, scheduled requests, identifying unknown or fuzzy parameters, cases with multiple choices, non-saturated frames, mistakes and the reporting of side effects.</p><p>After the definition of the 12 dialogue manager states, we conducted a second round of crowdsourcing. 
This time, the crowd workers were given the list of first-turn natural language utterances collected in step I, and the list of 12 system reaction codes. The workers were asked to verbalize the system reaction code in natural language, and to create a dialogue of user responses and system reactions which completes the dialogue successfully.</p><p>The final dataset of dialogues generated in step II includes 640 dialogues, and the number of turns per dialogue varies from 2 to 5. A sample dialogue is given below (U = user, S = system):</p><p>-U: Turn on the light!</p><p>-S: Do you want to turn on the light in the kitchen or in the whole flat?</p><p>-U: Only in the kitchen.</p><p>-S: Ok, done!</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Experiments on Command Type and Slot Identification</head><p>In this section we describe two experiments performed on the commands collected in step I. The raw data comprises the 320 first-turn natural language requests to the smart environment. The goal of the experiments is to automatically classify the commands a) by the type of command, and b) by slots, which represent various characteristics of the command, for example whether the command contains a condition or specifies some device parameters. This classification of basic command type and slots will help in the construction of a dialogue system, which is part of future work.</p></div>
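A multi-turn dialogue such as the sample above can be stored as a plain list of (speaker, utterance) pairs; the record layout below is an assumption for illustration, not the dataset's actual file format.

```python
# One step-II dialogue rendered as a list of (speaker, utterance) pairs,
# following the sample dialogue above. The layout is a hypothetical record
# format, not the dataset's published serialization.
dialogue = [
    ("U", "Turn on the light!"),
    ("S", "Do you want to turn on the light in the kitchen or in the whole flat?"),
    ("U", "Only in the kitchen."),
    ("S", "Ok, done!"),
]

n_utterances = len(dialogue)
user_turns = [u for speaker, u in dialogue if speaker == "U"]
print(n_utterances)
print(len(user_turns))
```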
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Description of Command Classes</head><p>For the purpose of classifying the first-turn user commands, we use six features. The first feature is the command type. We currently distinguish three command types: request (1), e.g., "How much water did I spend last month?"; explicit command (2), which can be directly mapped to an entity in a knowledge base via its label; and implicit command (3), which cannot be directly mapped, so the system has to guess what to do (e.g., "Reduce the temperature!").</p><p>These three types have little in common with typical speech act classifications; they were introduced to distinguish implicit commands, explicit commands, and requests (typically requests to databases external to the IoT system).</p><p>The other five features are the slots describing important command characteristics (each slot has the value 1 if the corresponding entity is mentioned in the command and 0 otherwise; examples of requests are given in brackets):</p><p>-Device ("Turn on the desk lamp");</p><p>-Condition ("… is higher than 25 °C, set the split system to +18 °C");</p><p>-Location ("Start watering in greenhouse 17");</p><p>-Parameter ("Set the brightness of the chandelier to 600 Lumen");</p><p>-Schedule ("Turn on the oven at 9 pm").</p><p>Thus, a vector of labels characterizing a command looks as follows: (2, 0, 1, 1, 0, 0) for the command "Turn off the light when I leave the room". This is an explicit command (2), a device is not mentioned explicitly (0), a condition (1) and a location (1) are present, no parameters are specified (0), and the command is not scheduled (0).</p></div>
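The six-element label vector can be made explicit with a small sketch; the field names below are our own shorthand for the features described above, not identifiers from the dataset.

```python
# Decoding the six-element label vector from Section 3.1.
# LABEL_NAMES is our shorthand for the features, not a dataset identifier.
LABEL_NAMES = ["command_type", "device", "condition", "location", "parameter", "schedule"]

# "Turn off the light when I leave the room":
# explicit command, no explicit device, condition and location present,
# no parameters, not scheduled.
label = (2, 0, 1, 1, 0, 0)

annotation = dict(zip(LABEL_NAMES, label))
print(annotation["command_type"])  # 2 -> explicit command
print(annotation["condition"])     # 1 -> a condition is present
```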
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Model Setup</head><p>Given those six features, we created a gold-standard dataset by manually annotating each of the 320 commands collected in step I with a vector of 6 features: the element at position '0' denotes the command type, and the elements at positions '1'-'5' denote the slot types. The sequence of slot types is a binary vector: if objects corresponding to a particular slot are present in the request, the value is '1', otherwise '0'.</p><p>For model training and evaluation, we used word embedding representations of the user commands. First, we trained word vectors on the Aranea Russicum Maius corpus<ref type="foot" target="#foot_6">8</ref>. The size of the corpus is 1,200,001,911 tokens. The model was trained on the raw text without preprocessing using the word2vec library <ref type="bibr" target="#b10">[Mikolov et al., 2013]</ref>, with the skip-gram algorithm and a vector size of 100 dimensions. With those word embeddings, we created a vector representation of a whole user request by averaging the word vectors of the words in the request. In future work, we will experiment with more elaborate methods to represent the commands, for example weighting schemes for words, sentence embeddings, etc.</p></div>
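The averaging step can be sketched as follows; the tiny 4-dimensional toy embeddings are invented for illustration (the paper uses 100-dimensional skip-gram vectors trained on the Aranea Russicum Maius corpus).

```python
# Averaging word vectors into a request vector, as in Section 3.2.
# The 4-dimensional embeddings below are toy values for illustration only.
import numpy as np

embeddings = {
    "turn": np.array([0.1, 0.2, 0.0, 0.3]),
    "on":   np.array([0.0, 0.1, 0.1, 0.1]),
    "the":  np.array([0.2, 0.0, 0.2, 0.0]),
    "lamp": np.array([0.3, 0.3, 0.1, 0.2]),
}


def request_vector(tokens, emb):
    """Mean of the word vectors of all in-vocabulary tokens."""
    vectors = [emb[t] for t in tokens if t in emb]
    return np.mean(vectors, axis=0)


vec = request_vector("turn on the lamp".split(), embeddings)
print(vec.shape)  # (4,)
```

Out-of-vocabulary tokens are simply skipped here; how the paper handles them is not specified.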
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Experiment Results for Command Type Classification</head><p>First, we classify the commands into the three command types described above. In the gold data labelled by experts, the command types are distributed as follows: 64% of utterances are explicit commands to the system, 25% are requests and 11% are implicit commands. The classification of command types into those three classes was performed with the Weka machine learning library<ref type="foot" target="#foot_7">9</ref> using the word embedding representations of user commands. The results are given in Table <ref type="table" target="#tab_0">1</ref>.</p></div>
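The experiments run Weka classifiers over the averaged request vectors; as a library-free stand-in, the sketch below classifies a request vector by the nearest class centroid, using toy, hypothetical training vectors. This is not one of the Weka algorithms evaluated in the paper, only an illustration of vector-based command type classification.

```python
# Nearest-centroid classification of request vectors (toy 2-d data).
# Class labels follow Section 3.1: 1 = request, 2 = explicit command.
import numpy as np

train = {
    1: [np.array([0.9, 0.1]), np.array([0.8, 0.2])],  # request vectors
    2: [np.array([0.1, 0.9]), np.array([0.2, 0.8])],  # explicit command vectors
}

# One centroid per command type.
centroids = {label: np.mean(vectors, axis=0) for label, vectors in train.items()}


def predict(x):
    """Return the label of the nearest class centroid."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))


print(predict(np.array([0.85, 0.15])))  # closest to the "request" centroid
```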
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Experimental Results for Slot Type Classification</head><p>The same classification algorithms and word embedding representations were used in the slot type classification tasks. According to the manual annotation, the slots in user utterances are distributed as follows: devices are mentioned in 31% of utterances, conditions in 27%, locations in 24%, and parameters and scheduling in 14% each. Here, we have five binary classification tasks per user command, one per slot type.</p><p>The classification results for the Device, Condition and Location slots can be adopted as a baseline and used in future work. However, as indicated by the prediction accuracy for the Parameter and Scheduling slots, the baseline classifier seems not to have learned to discriminate those: for the Parameter and Scheduling slots, a similar prediction accuracy can be reached by always predicting the value 0. Moreover, we plan to evaluate the classifiers on more balanced datasets in future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusions</head><p>The paper presents and applies a methodology for creating datasets for voice communication with IoT devices. The method focuses on user commands which lead to specific dialogue manager states, and are finally resolved in a dialogue between the IoT system and the device user. The datasets created with crowdsourcing reflect those dialogue elements, and were also applied to classify user commands according to six features which will be used for dialogue state tracking in future work.</p><p>The main contributions of the paper include (i) a procedure for crowdsourcing voice control data from end users, (ii) the provision of datasets, (iii) automatic morphosyntactic annotation of the datasets, and (iv) a baseline evaluation of various machine learning algorithms. The datasets created, especially the natural language responses which users formulated on behalf of the system, may also be re-used in other tasks, for example as patterns in natural language generation tasks.</p><p>Finally, there are multiple directions for future work. We plan to extend the dialogue datasets with crowdsourcing techniques, to improve on the classification baselines presented in the experiments section, and to apply the collected data to the creation of IoT dialogue systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1:</head><label>1</label><figDesc>Utterance Type Classification</figDesc><table><row><cell>Classification Algorithm</cell><cell>Accuracy</cell></row><row><cell>k-nearest Neighbour (IBk)</cell><cell>0.7531</cell></row><row><cell>C4.5 Decision Trees (J48)</cell><cell>0.6406</cell></row><row><cell>Rule Learner (PART)</cell><cell>0.6313</cell></row><row><cell>Naive Bayes (NaiveBayes)</cell><cell>0.6719</cell></row><row><cell>Holte's 1R (One Rule) Classifier (OneR)</cell><cell>0.6125</cell></row><row><cell>Support Vector Machines (SMO)</cell><cell>0.7937</cell></row><row><cell>Logistic Regression (Logistic)</cell><cell>0.6562</cell></row><row><cell>AdaBoost (AdaBoostM1)</cell><cell>0.6500</cell></row><row><cell>LogitBoost (LogitBoost)</cell><cell>0.7500</cell></row><row><cell>Decision Stumps for Boosting (DecisionStump)</cell><cell>0.6344</cell></row></table></figure>
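The always-predict-0 baseline for the rare slots is easy to quantify: with a slot present in about 14% of utterances, the majority-class baseline already reaches roughly 86% accuracy, which is why comparable classifier accuracy on those slots is uninformative.

```python
# Majority-class baseline accuracy for a rare binary slot.
# With Parameter/Schedule slots present in ~14% of utterances (Section 3.4),
# always predicting 0 is correct on the remaining ~86%.
positive_rate = 0.14
majority_accuracy = 1.0 - positive_rate
print(round(majority_accuracy, 2))
```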
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.gartner.com/en/newsroom/press-releases/2017-02-07-gartner-says-8-billion-connected-thin</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://ai.googleblog.com/2017/08/launching-speech-commands-dataset.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://voice.mozilla.org/ru/data</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://github.com/organizations/MANASLU8/VoiceIoT</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">In the text of the paper, all examples are given in English translation. The original dataset is in Russian.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">This command starts the washing process, but does not turn the washing machine on.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_6">http://aranea.juls.savba.sk/aranea_about/index.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_7">https://www.cs.waikato.ac.nz/ml/weka/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgments. L. Kovriguina acknowledges support from the Russian Foundation for Basic Research (RFBR), Grant No. 16-36-60055. G. Wohlgenannt acknowledges support from the Government of the Russian Federation (Grant 074-U01) through the ITMO Fellowship and Professorship Program.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0" />			</div>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Smart Homes: Past, Present and Future</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">K</forename><surname>Aldrich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Inside the Smart Home</title>
		<meeting><address><addrLine>London</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="17" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Learning concepts through conversations in spoken dialogue systems</title>
		<author>
			<persName><surname>Jia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="5725" to="5729" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Dialogue Systems and Chatbots // Speech and Language Processing</title>
		<author>
			<persName><forename type="first">D</forename><surname>Jurafsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Martin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="121" to="137" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Russian Tagging and Dependency Parsing Models for Stanford CoreNLP Natural Language Toolkit // International</title>
		<author>
			<persName><surname>Kovriguina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference on Knowledge Engineering and the Semantic Web</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="101" to="111" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Training End-to-end Dialogue Systems with the Ubuntu Dialogue Corpus // Dialogue &amp; Discourse</title>
		<author>
			<persName><surname>Lowe</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="31" to="65" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Efficient estimation of word representations in vector space</title>
		<author>
			<persName><surname>Mikolov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1301.3781</idno>
	</analytic>
	<monogr>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Design and evaluation of a smart home voice interface for the elderly: acceptability and objection aspects // Personal and Ubiquitous Computing</title>
		<author>
			<persName><surname>Portet</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="127" to="144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Cognitive architectures // HauptSeminar Human Robot Interaction</title>
		<author>
			<persName><forename type="first">S</forename><surname>Profanter</surname></persName>
		</author>
		<author>
			<persName><surname>Serban</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1512.05742</idno>
	</analytic>
	<monogr>
		<title level="m">A Survey of Available Corpora For Building Data-Driven Dialogue Systems</title>
				<imprint>
			<date type="published" when="2012">2012; 2015; 2016</date>
			<biblScope unit="page" from="11" to="21" />
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
	<note>// Proceedings of the SIGDIAL 2016 Conference</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Vernon</surname></persName>
		</author>
		<title level="m">Artificial cognitive systems: A primer</title>
				<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
