<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Human-Aware Interaction: A Memory-inspired Artificial Cognitive Architecture</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Roel</forename><surname>Pieters</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Mattia</forename><surname>Racca</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Veronese</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Ville</forename><surname>Kyrki</surname></persName>
						</author>
						<title level="a" type="main">Human-Aware Interaction: A Memory-inspired Artificial Cognitive Architecture</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">51F6847BEDEE292F9C03C0B400A661F1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:09+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this work we aim to develop a human-aware cognitive architecture to support human-robot interaction. Human-aware means that the robot needs to understand the complete state of the human (physical, intentional and emotional) and interact (actions and goals) in a human-cognitive way. This is motivated by the fact that a human interacting with a robot tends to anthropomorphize the robotic partner. That is, humans project a (cognitive, emotional) mind onto their interactive partner, and expect a human-like response. Therefore, we intend to include procedural and declarative memory, a knowledge base and reasoning (over the knowledge base and over actions) in the artificial cognitive architecture. Evaluation of the architecture is planned with a Care-O-Bot 4.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>I. INTRODUCTION</head><p>As the Western world is aging, solutions must be found that sustain the current high-quality welfare state into the future. This research aims to assess the suitability of robotics for assistance and care. Such human-robot interaction should foremost be safe, intuitive and user-friendly. This implies that the robot must understand the person's tasks, intentions and actions, and must include a knowledge base for information storage and reasoning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. PERCEPTION: INTENTION AND TASK MODELING</head><p>In order to provide assistance, the general state of the human, as well as the task, should be known. Human attention can be used to understand a person's intentions and the task he/she is engaged in. By detecting the head pose of the human and projecting it into a 3D point cloud of the environment, a weighted attention map can be generated (Fig. <ref type="figure">1-left</ref>). Segmenting this map returns the object of interest and can be used to determine which task the person is engaged in <ref type="bibr" target="#b0">[1]</ref>. Additionally, by actively gathering information (e.g., the robot asking questions), a model of the task can be learned (Fig. <ref type="figure">1-right</ref>). This decision-making problem under uncertainty can be modeled as a partially observable Markov decision process (POMDP). By solving the POMDP, the robot can refine the task model, supervise the task execution and provide assistance for the next phase <ref type="bibr" target="#b1">[2]</ref>.</p><p>Fig. <ref type="figure">1</ref>: Left: Weighted attention map returning three objects of interest; the plate received the most interest (red). Right: Task modeling scenario. A person is making a sandwich while a NAO robot observes and asks questions to build a task model for assistance.</p></div>
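The question-asking loop above boils down to maintaining a belief over candidate task models and sharpening it with each (noisy) answer. As an illustrative sketch only — the paper does not give an implementation, and the model names and likelihood values below are invented for the example — the core Bayesian belief update of such a POMDP could look like:

```python
import numpy as np

def belief_update(belief, likelihood):
    """Bayes update over candidate task models: posterior ∝ likelihood × prior."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

# Uniform prior over three hypothetical task models.
belief = np.array([1 / 3, 1 / 3, 1 / 3])

# A "yes" answer to a question (e.g., "is bread the next object?") is
# most likely under model 0; these observation probabilities are made up.
likelihood_yes = np.array([0.8, 0.3, 0.1])
belief = belief_update(belief, likelihood_yes)
# The belief now concentrates on model 0.
```

A full POMDP solver would additionally choose *which* question to ask by weighing expected information gain against the cost of interrupting the person; the sketch shows only the state-estimation half of that loop.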
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. COGNITIVE MODELING: MEMORY AND REASONING</head><p>The knowledge base is divided into declarative memory (semantic and episodic facts) and procedural memory (an action library). Semantic facts are general knowledge representing the beliefs, relations and intentions of the world, of humans and of objects. Episodic memory describes information about events and instances that occurred, e.g., what, where and when an event happened. The action library contains primitives and sequences of tasks available to the robot. For example, the task model is encoded as declarative knowledge and describes the intentions of and relations between states (phases) in a task. Moreover, it can also be described by an action sequence and an event sequence (episodic knowledge). Reasoning over the knowledge base allows for fact checking, relation assessment and event comparison, and can be used for future predictions (internal simulation). Reasoning over the action library allows actions and action sequences to be reused, adapted and augmented for different tasks.</p></div>
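The three stores described above can be pictured as simple typed containers: semantic facts as relation triples, episodic memory as (what, where, when) records, and the action library as named primitive sequences. The following sketch is purely illustrative — the class and field names are assumptions, not the authors' implementation — but it shows how a fact-checking query over the semantic store might work:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class KnowledgeBase:
    # Declarative memory: semantic facts as (subject, relation, object) triples.
    semantic: List[Tuple[str, str, str]] = field(default_factory=list)
    # Declarative memory: episodic records of (what, where, when).
    episodic: List[Tuple[str, str, str]] = field(default_factory=list)
    # Procedural memory: action name -> sequence of primitives.
    actions: Dict[str, List[str]] = field(default_factory=dict)

    def query(self, relation: str) -> List[Tuple[str, str, str]]:
        """Fact checking: return all semantic facts with the given relation."""
        return [fact for fact in self.semantic if fact[1] == relation]

kb = KnowledgeBase()
kb.semantic.append(("plate", "is-a", "tableware"))
kb.episodic.append(("make-sandwich", "kitchen", "2016-12-08T10:00"))
kb.actions["fetch"] = ["locate", "grasp", "deliver"]
```

Relation assessment and event comparison would then be further queries over the same structures, and internal simulation would replay episodic records against the action library.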
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. SYMBOLIC TASK PLANNING AND EXECUTION</head><p>The main function of the symbolic task planner is to generate a suitable plan by checking whether the task was experienced in the past (episodic memory in the knowledge base) and how (procedural memory in the action library). Missing information for a generated plan is obtained from perception and from reasoning over the knowledge base and the action library. For example, actions take arguments that apply to internal variables and functions (e.g., object pose, speech recognition). High-level execution ensures that the planned task is executed appropriately and the instructed goal is achieved (Fig. <ref type="figure">2</ref>).</p></div>
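The plan-generation logic described above — reuse a remembered action sequence when the task is known, otherwise flag the gap so perception and reasoning can fill it — can be sketched in a few lines. This is a hypothetical reading of the text, not the authors' planner; the function and variable names are invented:

```python
from typing import Dict, List, Optional, Set

def generate_plan(task: str,
                  episodic_memory: Set[str],
                  action_library: Dict[str, List[str]]) -> Optional[List[str]]:
    """Reuse a known action sequence if the task was experienced before;
    return None when information is missing and must come from
    perception or reasoning over the knowledge base."""
    if task in episodic_memory and task in action_library:
        return action_library[task]  # reuse procedural knowledge
    return None                      # missing info: defer to perception/reasoning

episodic_memory = {"make-sandwich"}
action_library = {"make-sandwich": ["fetch bread", "fetch plate", "assemble"]}

plan = generate_plan("make-sandwich", episodic_memory, action_library)
```

In the full architecture, each primitive in the returned sequence would be parameterized at execution time (e.g., bound to an object pose or a speech-recognition result), with high-level execution monitoring goal achievement.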
<div xmlns="http://www.tei-c.org/ns/1.0"><head>V. ROSE AND CARE-O-BOT 4</head><p>The proposed developments are part of the interdisciplinary research project ROSE (Robots and the Future of Welfare Services<ref type="foot" target="#foot_1">2</ref>), which aims to study the social and psychological aspects of service robotics. In particular, one aim of this project is to investigate the requirements for social HRI with elderly people and how these should be integrated in practice. This applies to both the technological requirements (i.e., what capabilities and algorithms are necessary) and the social requirements (i.e., what the user wants). The Care-O-Bot 4 will be used for human-robot interaction studies and evaluation of the proposed artificial cognitive architecture.</p><p>Fig. <ref type="figure">2</ref>: Artificial cognitive architecture for human-aware interaction.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">All authors are with School of Electrical Engineering, Aalto University, Finland. Corresponding author: roel.pieters@aalto.fi</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">http://roseproject.aalto.fi/en/ Proceedings of EUCognition 2016 -"Cognitive Robot Architectures" -CEUR-WS</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Action and intention recognition from head pose measurements</title>
		<author>
			<persName><forename type="first">A</forename><surname>Veronese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Racca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pieters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kyrki</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note>in preparation</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Active information gathering for task modeling in HRI</title>
		<author>
			<persName><forename type="first">M</forename><surname>Racca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pieters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kyrki</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note>in preparation</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
