<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Dialog Management for a Social Assistive Robot in the Domain of Elderly Care</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Berardina</forename><surname>De Carolis</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Italy Exprivia S.p.A</orgName>
								<address>
									<settlement>Molfetta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giampaolo</forename><surname>Flace</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Italy Exprivia S.p.A</orgName>
								<address>
									<settlement>Molfetta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">La</forename><surname>Forgia</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Italy Exprivia S.p.A</orgName>
								<address>
									<settlement>Molfetta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Nicola</forename><surname>Macchiarulo</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Italy Exprivia S.p.A</orgName>
								<address>
									<settlement>Molfetta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giovanni</forename><surname>Melone</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Italy Exprivia S.p.A</orgName>
								<address>
									<settlement>Molfetta</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Dialog Management for a Social Assistive Robot in the Domain of Elderly Care</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">199B65051F940354B9D9AE06B0C9D2F1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T00:19+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Social Robot</term>
					<term>Conversational System</term>
					<term>Belief Desire Intention</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The world population is aging, and one of the main concerns of the aged care industry is to provide appropriate care for elderly people as their health and independent functioning decline. Socially Assistive Robot technology could assume an important role in health and social care to meet this growing demand. In this paper, we present an architecture that supports believable conversation capabilities of a social robot in the context of daily task assistance to elderly people. The conversational system is based on a BDI architecture that mixes deliberative and reactive reasoning in order to determine, at each step, which goals are valid and, consequently, which action to perform to reach a goal according to the current dialog context, including the emotional state of the user. The architecture has been preliminarily tested on a low-cost robot, Ohmni in this case, usually dedicated to telepresence, and the results of this evaluation show that, from the dialog management point of view, it is robust enough to handle dialogs typical of the elderly care domain.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In today's society, the geriatric population is steadily increasing and requires more and more assistance and monitoring <ref type="bibr" target="#b0">[1]</ref>. These needs are sometimes not fully met due to the shortage of elderly care personnel. In this context, one of the most ambitious challenges is to introduce Social Assistive Robots (SAR) that provide assistance to elderly people by relying on social interaction <ref type="bibr" target="#b1">[2]</ref>.</p><p>The ability to communicate using natural language and high-level dialog management is therefore a fundamental requirement for a social robot, since spoken dialogue is generally considered the most natural way for human-robot interaction <ref type="bibr" target="#b2">[3]</ref>.</p><p>Research in the field of Human-Robot Interaction (HRI) is increasingly focusing on robots equipped with intelligent conversational abilities, developing dialogue systems able to deal with real-world scenarios and support specific tasks, especially in social contexts.</p><p>Simulating human-to-human communication to enhance and ease human-to-machine communication is still a challenging task, especially when the goal is to enable a natural, adaptive, context- and affect-aware interaction.</p><p>To this aim, we developed the architecture of a Conversational System (CS) that is able to support believable conversations in the context of daily task assistance to elderly people.</p><p>In this work, a general architecture to implement a CS is presented, with the aim of supporting the conversational capabilities of social robots, even a low-cost robot like the one used for developing the prototype presented in this paper: Ohmni, a low-cost domestic robot with an Android operating system, developed by OhmniLabs and usually dedicated to telepresence. 
It does not have a humanoid appearance, but consists of a simple metal rod that supports a microphone, a speaker, a webcam, and a touchscreen, together with a mobile base that allows it to move around. Such very rudimentary features, and the lack of conversational capabilities or SDK functions to implement them, make Ohmni a perfect candidate for the developed conversational system.</p><p>The proposed architecture is based on the Beliefs, Desires, Intentions (BDI) one. As explained in <ref type="bibr" target="#b3">[4]</ref>, BDI architectures are primarily focused on practical reasoning, i.e. the process of mixing deliberative and reactive reasoning in order to determine, at each step, which goals are valid and, consequently, which action to perform to reach a goal according to the current state of the world. Mixing reactive and proactive approaches enables the management of coherent conversations while still being responsive to unexpected user inputs.</p><p>The dialog module then takes into account the agent's knowledge (the beliefs) and selects at each step the best goal to be fulfilled through the activation of an intention that the robot commits to. Intentions are then satisfied through the execution of plans that may include dialog acts and service activations. The model integrates a deliberative and a reactive component, since goals are triggered, revised dynamically, and satisfied by triggering appropriate plans, so that an appropriate response can be generated and returned to the user.</p><p>During the interaction between users and the robot, perceptions are captured, as is typical for conversational interfaces, by analysing the user's speech during the dialogs.</p><p>The voice captured through the robot's microphone is used both for Speech To Text (STT) conversion and for emotion analysis. 
The converted text is processed with Natural Language Processing (NLP) techniques in order to extract the user intention (intent), significant entities, and sentiment polarity. The voice signal is then analysed with specific software to recognize emotions. This information, also called extracted perceptions, along with background knowledge and the active goal, is interpreted as beliefs and used by a dialogue management component to trigger goals that are satisfied by pre-defined plans. These plans are generally short, as they have to adapt to the unpredictable interaction with the user, who, after all, can say anything.</p><p>This architecture has been fully implemented and tested on the Ohmni robotic platform in the elderly care interaction scenario realized for the SI-Robotics project.</p><p>The remainder of the paper details this conversational architecture. The next section introduces the Italian project SI-Robotics. Section 3 provides an overview of the architecture as well as the dialog system that we have developed for our robot, going into detail about its components. Section 4 provides a practical application of the conversational system in the context of elderly assistance. Section 5, finally, summarises our main contributions and restates the key challenges that human-robot interaction brings to Artificial Intelligence.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">SI-Robotics</head><p>SI-Robotics is an Italian project that aims to design and develop innovative collaborative robotics solutions to support human beings in health and care services, in home, residential, and hospital environments. The project aims to produce advanced models of interaction designed to motivate active aging. These solutions are easily adaptable and help elderly people in daily activities, anticipating their needs and offering teleassistance, monitoring, and coaching services. To meet these needs, SI-Robotics proposes the realization of a cognitive agent able to interface with humans, IoT devices, social robots, and services present in the cloud <ref type="bibr" target="#b4">[5]</ref>. The development of the SI-Robotics project has been entrusted to Exprivia, which collaborates with sixteen other partners to design the system architecture, software services, and AI-based functionalities, also taking care of their integration.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">An Overview of the Conversational System</head><p>The Conversational System (CS) allows the social robot, Ohmni in this case, to operate as a virtual assistant, providing it with both reactive and deliberative capabilities based on the architecture illustrated in Figure <ref type="figure" target="#fig_0">1</ref>. It is composed of two main components:</p><p>• The Conversational User Interface (CUI), developed as a web application running inside the robot's tablet, which enables interaction between the senior and Ohmni; • The Conversational Agent (CA), which, based on the input received from the CUI, interprets</p><p>the input and produces an answer, along with a possible action to be performed. Figure <ref type="figure" target="#fig_0">1</ref> shows at a high level the interaction among these components. In particular, the touchscreen on the Ohmni robot makes it possible to manage the interaction through the CUI, which has been developed as a web application due to the constraints of the Ohmni management system <ref type="bibr" target="#b5">[6]</ref>. It is displayed in a special Android app that implements a WebView. The CUI allows not only displaying multimedia content but also managing the speech-based interaction with the user. The workflow of the interaction process is described in detail in the following section. The robot communicates with a cloud platform to receive events, such as the start of a daily check-up or a reminder task. The events are read by the CUI in order to notify the CA, and a specific behavior is activated. 
Once the user's voice is recorded or an event is triggered, the CUI sends a request to the CA's web API in order to obtain a response in JSON format containing the following information: the textual response to be played back to the user, one or more suggestions about what the robot expects the user to say next, HTML content to be rendered, and a possible action to be performed.</p><p>To provide a more general view of the components and subcomponents that make up the CS, Figure <ref type="figure" target="#fig_1">2</ref> shows the component diagram. </p></div>
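The CUI-to-CA exchange described above can be sketched as follows. This is a minimal illustration only: the paper lists the kinds of information carried by the JSON response, but the field names ("text", "suggestions", "html", "action") and the helper function are our own illustrative choices, not the project's actual API.

```python
import json

# Hypothetical sketch of the JSON payload returned by the CA's web API.
# Field names are illustrative; the paper only lists the information carried.
def make_ca_response(text, suggestions=None, html=None, action=None):
    """Build the CA response that the CUI renders and speaks."""
    return {
        "text": text,                      # spoken back via speech synthesis
        "suggestions": suggestions or [],  # what the robot expects next
        "html": html,                      # optional multimedia content
        "action": action,                  # optional action to be performed
    }

response = make_ca_response(
    "It's time for your daily check-up. Shall we start?",
    suggestions=["yes", "not now"],
    action="start_checkup",
)
payload = json.dumps(response)  # what the web API would return to the CUI
print(payload)
```

On the CUI side, such a payload would be parsed, the "text" field sent to speech synthesis, and the suggestions rendered as touch buttons.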
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">The Conversational User Interface</head><p>For reasons strictly related to integration with the Ohmni robot, the CUI, shown in Figure <ref type="figure" target="#fig_2">3</ref>, is implemented as a web application, using Javascript as the programming language. Its features include:</p><p>• Recording the user's voice while talking; • Communicating with the bot by submitting the recorded audio or a captured photo; • Displaying the content of the CA response message (composed of a textual answer, one or more suggestions, and possibly multimedia content); • Synthesizing speech to reproduce the textual response received from the bot;</p><p>• Visualizing the webcam video stream in a dedicated frame;</p><p>• Capturing a photo using the robot's webcam;</p><p>• Communicating with the cloud platform to receive events;</p><p>• Executing specific actions.</p><p>In particular, the HarkJS library is used for voice detection, while the Ohmni Standalone library provided by Ohmnilabs is used for speech synthesis.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">The Conversational Agent</head><p>The CA implements the following functionalities:</p><p>• Conversion of received audio to text (Speech to Text) using the Google Speech service;</p><p>• Facial recognition of a person and detection of his/her gender from the face;</p><p>• Sending communication messages to the cloud platform;</p><p>• Storing and retrieving user data from the database;</p><p>• Extracting perceptions from text and audio;</p><p>• Managing the conversational flow through a dialog management function.</p><p>The last two aspects are explored in more detail in the subsequent sections.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1.">Extraction of perceptions</head><p>The following perceptions are extracted during the dialog:</p><p>• User intention: the goal or activity the user wishes to accomplish;</p><p>• Sentiment polarity: positive, negative, or neutral;</p><p>• Entities: useful information that can be found in a sentence; • Emotion: happiness, fear, anger, disgust, sadness, or surprise.</p><p>As shown in Figure <ref type="figure" target="#fig_3">4</ref>, the first three perceptions are extracted from the converted text, while the emotion is detected directly from the analysis of the recorded audio using Vokaturi <ref type="bibr" target="#b6">[7]</ref>. To limit as much as possible the use of paid cognitive services in the cloud, ad-hoc predictive models based on deep learning are trained for intent, entity, and sentiment recognition. In particular, intent and sentiment are predicted using Google BERT <ref type="bibr" target="#b7">[8]</ref>, while entities are extracted using a mixed approach based on Spacy <ref type="bibr" target="#b8">[9]</ref>, regular expressions, and text search that also considers lemmas. Finally, the Vokaturi library is used for emotion recognition.</p><p>It should be noted that the use of BERT for intent recognition and sentiment analysis is justified by the fact that this deep learning model adopts a bidirectional transformer architecture. Thanks to its multi-head self-attention mechanism, BERT reads the entire word sequence at once during term processing, and this enables it to learn contextual relationships between words (i.e., to take into account the context of the sentence). In addition, it is one of the first models to adopt the technique of transfer learning within NLP. 
The pre-training phase involves unsupervised learning, training the network on generic tasks, while in the fine-tuning phase the network is reused and adapted to perform a more specific task with a supervised approach. The downstream task used for both intent recognition and sentiment analysis during the fine-tuning process was sequence classification. The advantages of transfer learning are reduced training time, improved prediction, and the ability to use a relatively small dataset and still get good results.</p></div>
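The perception-extraction step of Figure 4 can be sketched schematically as follows. In the real system, intent and sentiment come from fine-tuned BERT classifiers and emotion from Vokaturi; here simple keyword rules stand in for those models so the data flow can be shown end to end, and the keyword tables, intent labels, and class names are illustrative only (the CheckUp and RemindMed intents and their example utterances are taken from Section 4 of the paper).

```python
from dataclasses import dataclass, field

# Schematic stand-in for the perception extraction step: in production,
# intent and sentiment would be BERT sequence-classification predictions
# and emotion a Vokaturi analysis of the audio. Keyword rules play their
# role here purely for illustration.
@dataclass
class Perceptions:
    intent: str
    sentiment: str            # "positive" | "negative" | "neutral"
    entities: dict = field(default_factory=dict)
    emotion: str = "neutral"  # would come from the audio, not the text

INTENT_KEYWORDS = {"check-up": "CheckUp", "pills": "RemindMed"}
NEGATIVE_WORDS = {"pain", "bad", "tired"}

def extract_perceptions(text: str) -> Perceptions:
    lowered = text.lower()
    intent = next((label for kw, label in INTENT_KEYWORDS.items()
                   if kw in lowered), "smalltalk")
    sentiment = ("negative" if NEGATIVE_WORDS & set(lowered.split())
                 else "neutral")
    return Perceptions(intent=intent, sentiment=sentiment)

p = extract_perceptions("please remind me the pills I have to take today")
print(p.intent)  # RemindMed
```

The `Perceptions` record is then what the dialogue manager consumes as new beliefs.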
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2.">Dialogue Management</head><p>Once perceptions are obtained, a dialogue-management mechanism inspired by the BDI model is used to obtain the response.</p><p>As in our approach, the conversational capabilities and dialog management of an agent can be implemented through the BDI agent model <ref type="bibr" target="#b9">[10]</ref>, which has been used successfully in a range of applications. This model allows for a mix of reactive behaviour and goal-directed reasoning, supporting different means of achieving a goal depending on the context and other factors <ref type="bibr" target="#b10">[11]</ref>. The mixed reactive/proactive model enables the management of coherent conversational activity while still being responsive to unexpected user input and aware of changes in the conversation context. BDI plans provide knowledge of how to perform different types of conversational activities, while appropriate Knowledge Bases (KBs) contain information about the entities associated with those activities.</p><p>BDI agent-based approaches to dialogue management have been proposed before (e.g., <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>); however, these have typically targeted task-oriented conversations (e.g., accessing email or managing an appointment). In our approach we use the BDI framework to provide variability in the way a goal is progressively achieved, as well as in the conversational content <ref type="bibr" target="#b13">[14]</ref>.</p><p>The goals of the CA are activated on the basis of its beliefs, which are eventually revised according to changes in its perceptions. Goals are fulfilled through the selection of different plans whose execution is, in turn, adapted to different contexts (i.e. recognized emotions, time of the day, etc.) 
thus producing behaviors that are appropriate to the user's situation.</p><p>A goal is implemented through a function parameterized with the converted text, the extracted perceptions, and a reference to the CA instance. The plans associated with this function are created by placing logical conditions on the values of the perceptions, especially on the intent. Goals are managed using a stack, so the function associated with the goal at the top of that stack is called to obtain the response.</p><p>This whole mechanism needs at least one goal function to work. For this reason, a General goal has been implemented and placed by default as the first element of the stack. In the case where there are no feasible plans (i.e. no condition in the goal function is satisfied), the CA initially responds in a generic way, notifying the user that it has not understood and inviting the user to repeat. If this happens a second time, it formulates the previous question again. As the dialog evolves, goals are pushed onto the stack and the user's request is managed by the corresponding goal function.</p><p>Finally, to prevent the robot from responding when it is not needed, communication with the CA is initially disabled and is then re-enabled by the user by uttering one of the following wake-words: robot or ohmni. In Figure <ref type="figure" target="#fig_4">6</ref>, screenshots of the CUI are shown during the dialogue between the elder and the robot. As shown in Figure <ref type="figure" target="#fig_5">7</ref>, testing of the conversational solution was conducted both from PCs via browser and on the Ohmni robot at the Exprivia enterprise; COVID-19 restrictions prevented an evaluation in ALHs (Assisted Living Houses), where testing could have been conducted with elderly people. </p></div>
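The goal-stack mechanism described above can be sketched as follows. This is a schematic reading of the text, not the project's code: the goal names beyond General (e.g. a check-up goal), the ReportSymptom intent label, and the reply strings are all hypothetical.

```python
# Minimal sketch of the goal-stack dialog manager described above.
# Goal functions, plan conditions, and replies are illustrative only.
class ConversationalAgent:
    def __init__(self):
        # The General goal is placed by default as first element of the stack.
        self.goal_stack = [self.general_goal]
        self.last_question = None

    def respond(self, text, perceptions):
        # The goal on top of the stack handles the current user input.
        return self.goal_stack[-1](text, perceptions)

    def general_goal(self, text, perceptions):
        # Plans are logical conditions on perceptions, especially the intent.
        if perceptions.get("intent") == "CheckUp":
            self.goal_stack.append(self.checkup_goal)  # commit to a new goal
            self.last_question = "How are you feeling today?"
            return self.last_question
        # No feasible plan: fall back to a generic clarification request.
        return "Sorry, I did not understand. Could you repeat?"

    def checkup_goal(self, text, perceptions):
        if perceptions.get("intent") == "ReportSymptom":
            self.goal_stack.pop()  # goal fulfilled, back to the General goal
            return "Thank you, I will record that."
        return self.last_question  # otherwise re-ask the pending question

agent = ConversationalAgent()
print(agent.respond("let's make the daily check-up", {"intent": "CheckUp"}))
print(agent.respond("my head hurts", {"intent": "ReportSymptom"}))
```

Pushing and popping goal functions in this way is what lets the agent stay on a coherent task (the check-up) while still reacting to whatever the user actually says.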
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and Future Work</head><p>The work described in this paper is a preliminary step towards a general architecture of a Social Assistive Robot for elderly people living in Assisted Living Houses. In this phase of our research we developed a conversational system based on a BDI architecture and integrated it into the Ohmni robot, a low-cost product designed only for telepresence tasks, in order to endow it with conversational capabilities. The results obtained so far with the prototype are satisfactory, but need improvement, especially in voice detection.</p><p>Future developments will focus on the following points:</p><p>• Add new assistive features by introducing more intents and plans;</p><p>• Adopt smart microphones capable of filtering noise to improve speech recognition;</p><p>• Evaluate the conversational system by carrying out usability and technology acceptance tests with real users (i.e. elderly people in ALHs); • Develop services in such a way that the conversational system can also be used on other platforms.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Conversational system architecture in the form of components and message flow.</figDesc><graphic coords="3,89.29,415.16,416.67,215.78" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Component diagram of the conversational system.</figDesc><graphic coords="4,89.29,299.09,416.71,189.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Conversational user interface implemented in the SI-ROBOTIC project.</figDesc><graphic coords="5,89.29,84.19,416.72,158.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Extraction of perceptions with a practical example.</figDesc><graphic coords="6,89.29,84.19,416.71,170.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Examples of CUI responses during dialogue.</figDesc><graphic coords="9,89.29,84.19,416.70,147.01" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Testing of the conversational system from the browser and on the Ohmni robot.</figDesc><graphic coords="9,89.29,383.62,416.69,128.16" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="8,108.88,382.51,375.01,215.05" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Research supported by the "SocIal ROBOTics for active and healthy ageing" (SI-ROBOTICS) project, funded by the Italian "Ministero dell'Istruzione, dell'Università e della Ricerca" under the framework "PON Ricerca e Innovazione 2014-2020", Grant Agreement ARS01 01120.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">An Example in the Context of Elderly Assistance</head><p>The conversational capabilities of the Ohmni robot have been finalized in order to support the following tasks:</p><p>• Registration of the senior profile; • Communication of relevant symptoms and general conditions; • Telepresence; • Information provision (e.g. weather); • Medicine reminders; • Daily check-up.</p><p>To this aim, we trained our system to recognize 10 main intents and 38 smalltalks <ref type="bibr" target="#b14">[15]</ref>. Examples of intents that can be recognized during the conversation are the following:</p><p>• CheckUp (e.g. "let's make the daily check-up"); • RemindMed (e.g. "please remind me the pills I have to take today"); Note that while the LOC entity is recognized by the basic Italian model provided by Spacy, the SYMPTOM entity is trained manually. In addition, it was possible to use regular expressions to capture the NUM entity, since the Google STT tool automatically converts the numbers spoken by the user into numeric format. To clarify how the system is expected to dialog with seniors in the context of assisted daily life, the example in Figure <ref type="figure">5</ref> is given. The figure describes a particular conversation that could be performed by the system.</p></div>			</div>
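The mixed entity-extraction approach described above can be sketched as follows. Only the regular-expression (NUM) and manually trained lexicon (SYMPTOM) parts are shown; LOC would come from spaCy's Italian model, which is omitted here, and the symptom lexicon entries are illustrative placeholders, not the project's actual training data.

```python
import re

# Sketch of the mixed entity-extraction approach: regex for NUM (the STT
# service already emits digits), lexicon lookup for SYMPTOM. The LOC entity
# (spaCy's Italian model in the real system) is not reproduced here.
SYMPTOM_LEXICON = {"headache", "fever", "cough", "nausea"}  # illustrative
NUM_PATTERN = re.compile(r"\b\d+(?:\.\d+)?\b")

def extract_entities(text):
    entities = []
    for match in NUM_PATTERN.finditer(text):
        entities.append(("NUM", match.group()))
    for token in re.findall(r"\w+", text.lower()):
        if token in SYMPTOM_LEXICON:
            entities.append(("SYMPTOM", token))
    return entities

print(extract_entities("I have a fever of 38.5 and a headache"))
# [('NUM', '38.5'), ('SYMPTOM', 'fever'), ('SYMPTOM', 'headache')]
```

In the full system a lemma-aware text search would complement the plain token lookup shown here, so inflected Italian forms of a symptom would still match.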
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Elderly population across EU regions</title>
		<ptr target="https://ec.europa.eu/eurostat/web/products-eurostat-news/-/DDN-20200402-1" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Defining socially assistive robotics</title>
		<author>
			<persName><forename type="first">D</forename><surname>Feil-Seifer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Matarić</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICORR.2005.1501143</idno>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">2005</biblScope>
			<biblScope unit="page" from="465" to="468" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A survey of socially interactive robots</title>
		<author>
			<persName><forename type="first">T</forename><surname>Fong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Nourbakhsh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Dautenhahn</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0921-8890(02)00372-X</idno>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="143" to="166" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Reasoning about Rational Agents, by Michael Wooldridge, MIT Press, Cambridge, Mass.</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Andrew</surname></persName>
		</author>
		<idno type="DOI">10.1017/S0263574701213496</idno>
	</analytic>
	<monogr>
		<title level="j">Robotica</title>
		<imprint>
			<biblScope unit="volume">200</biblScope>
			<biblScope unit="page" from="459" to="462" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="https://www.exprivia.it/it/show-info-event-full.php?id_event=6009" />
		<title level="m">Si-robotics</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="https://docs.ohmnilabs.com/webapi" />
		<title level="m">Webapi -ohmni developer manual</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Emotion detection: a technology review</title>
		<author>
			<persName><forename type="first">J</forename><surname>Garcia-Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Penichet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lozano</surname></persName>
		</author>
		<idno type="DOI">10.1145/3123818.3123852</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1810.04805</idno>
		<ptr target="https://arxiv.org/abs/1810.04805" />
		<title level="m">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">spaCy: Industrial-strength Natural Language Processing in Python</title>
		<author>
			<persName><forename type="first">M</forename><surname>Honnibal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Montani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Van Landeghem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Boyd</surname></persName>
		</author>
		<idno type="DOI">10.5281/zenodo.1212303</idno>
		<ptr target="https://doi.org/10.5281/zenodo.1212303" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Rao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Georgeff</surname></persName>
		</author>
		<ptr target="https://www.semanticscholar.org/paper/BDI-Agents%3A-From-Theory-to-Practice-Rao-Georgeff/8bb51f40236fd06406f22b31fcacb381539c3bf9" />
		<title level="m">BDI Agents: From Theory to Practice</title>
				<imprint>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Developing Intelligent Agent Systems: A Practical Guide</title>
		<author>
			<persName><forename type="first">L</forename><surname>Padgham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Winikoff</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Wiley</publisher>
			<pubPlace>Hoboken, NJ, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">An agent-based approach to dialogue management in personal assistants</title>
		<author>
			<persName><forename type="first">A</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wobcke</surname></persName>
		</author>
		<idno type="DOI">10.1145/1040830.1040865</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Intelligent User Interfaces</title>
				<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="137" to="144" />
		</imprint>
	</monogr>
	<note>Proceedings IUI</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Goal-Based Communication Using BDI Agents as Virtual Humans in Training: An Ontology Driven Dialogue System</title>
		<author>
			<persName><forename type="first">J</forename><surname>Van Oijen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">A</forename><surname>Van Doesburg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dignum</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-18181-8_3</idno>
	</analytic>
	<monogr>
		<title level="j">ResearchGate</title>
		<imprint>
			<biblScope unit="page" from="38" to="52" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Flexible conversation management using a bdi agent approach</title>
		<author>
			<persName><forename type="first">W</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Thangarajah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Padgham</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-642-33197-8_48</idno>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="volume">7502</biblScope>
			<biblScope unit="page" from="464" to="470" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<ptr target="https://github.com/microsoft/botframework-cli/blob/main/packages/qnamaker/docs/chit-chat-dataset.md" />
		<title level="m">Bot Framework Tools</title>
				<imprint>
			<date type="published" when="2021-11-11">11 Nov. 2021</date>
		</imprint>
		<respStmt>
			<orgName>Microsoft</orgName>
		</respStmt>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
