<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Knowledge Representation for Cognition-and Learning-enabled Robot Manipulation</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Daniel</forename><surname>Beßler</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Sebastian</forename><surname>Koralewski</surname></persName>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Beetz</surname></persName>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Institute for Artificial Intelligence Am</orgName>
								<address>
									<addrLine>Fallturm 1</addrLine>
									<postCode>28359</postCode>
									<settlement>Bremen</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Institute for Artificial Intelligence Am</orgName>
								<address>
									<addrLine>Fallturm 1</addrLine>
									<postCode>28359</postCode>
									<settlement>Bremen</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Knowledge Representation for Cognition-and Learning-enabled Robot Manipulation</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D532D92B397B51D70EAF7AD99E9C1C52</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T09:51+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Knowledge representation and reasoning (KR&amp;R) systems are widely employed for the representation of abstract knowledge. Action models are usually representations of state transitions: Actions can be performed if all pre-conditions are met, and it is expected that the designated effects will take place when the action is executed. However, embodied agents need additional knowledge about how their body should be moved to achieve their goals without causing unwanted side effects. The proposed action representation is based on force dynamic events that occur when an embodied agent interacts with its world. We show how patterns of force events can be used to define semantics of action verbs. Robots use our model to acquire episodic memories which are stories of their performance coupled with sub-symbolic data, and they share their experience through the knowledge service openEASE.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>The cognition system of humans allows us to accomplish manipulation tasks very competently. This is possible through the organization of actions in terms of motion phases, and through the prediction of effects</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>The cognition system of humans allows us to accomplish manipulation tasks very competently. This is possible through the organization of actions in terms of motion phases, and through the prediction of effects that actions might cause in terms of force events that might occur.</p><p>In this work, we investigate an action model postulated in human psychology, and make use of it in an artificial system. The model was proposed by Flannagan et al. <ref type="bibr">(2006)</ref>. Actions are decomposed into motion phases with different subgoals. The subgoals are force dynamic events that also generate distinctive sensory feedback in the nervous system.</p><p>Intentions of others can not be monitored directly. Monitoring force events, on the other hand, is at least less problematic because events may be monitored in the physics engine of virtual worlds, or observed by some agent. This is, for example, that the hand gets into contact with the milk package before grasping it from the table, or that the package looses contact to the supporting surface when the agent performs a retracting motion after the milk has been grasped. One of the main reasons for investigating action models from human psychology in robotics is that action models in AI, such as PDDL <ref type="bibr" target="#b7">(Ghallab et al. 1998</ref>), usually do not have an appropriate level of abstraction for robots. In particular, action models in AI often abstract away from body motions and only concentrate on representing action pre-and postconditions, and sequences. Intelligent embodied agents need to bridge the gap between these representations with missing information and the actual execution of an action in the physical world. Bridging this gap is non trivial and a problem which is widely unsolved on the abstract level (i.e., by re-usable general knowledge). It is further expected that conditions and effects of actions are pre-defined -a hard to meet requirement with the diversity of effects actions may cause in the physical world.</p><p>The central question for successful embodied action execution is how agents should move their bodies to achieve certain effects while avoiding unwanted side-effects. This is, for example, how a robot should move its arm such that the pancake mix contained in the bottle it holds is poured on top of the pancake maker, and forms a pancake with 10cm diameter. In the area of AI there are only few approaches that address this problem despite the semantic nature of this reasoning problem. that actions might cause in terms of force events that might occur.</p><p>In this work, we investigate an action model postulated in human psychology, and make use of it in an artificial system. The model was proposed by Flannagan et al. <ref type="bibr" target="#b6">[6]</ref>. Actions are decomposed into motion phases with different subgoals. The subgoals are force dynamic events that also generate distinctive sensory feedback in the nervous system.</p><p>Intentions of others can not be monitored directly. Monitoring force events, on the other hand, is at least less problematic because events may be monitored in the physics engine of virtual worlds, or observed by some agent. 
This is, for example, that the hand gets into contact with the milk package before grasping it from the table, or that the package looses contact to the supporting surface when the agent performs a retracting motion after the milk has been grasped.</p><p>One of the main reasons for investigating action models from human psychology in robotics is that action models in AI, such as PDDL <ref type="bibr" target="#b7">[7]</ref>, usually do not have an appropriate level of abstraction for robots. In particular, action models in AI often abstract away from body motions and only concentrate on representing action pre-and post-conditions, and sequences. Intelligent embodied agents need to bridge the gap between these representations with missing information and the actual execution of an action in the physical world. Bridging this gap is non trivial and a problem which is widely unsolved on the abstract level (i.e., by re-usable general knowledge). It is further expected that conditions and effects of actions are pre-defineda hard to meet requirement with the diversity of effects actions may cause in the physical world.</p><p>The central question for successful embodied action execution is how agents should move their bodies to achieve certain effects while avoiding unwanted sideeffects. This is, for example, how a robot should move its arm such that the pancake mix contained in the bottle it holds is poured on top of the pancake maker, and forms a pancake with 10cm diameter. In the area of AI there are only few approaches that address this problem despite the semantic nature of this reasoning problem.</p><p>One of the peculiarities of our KR&amp;R system is that it runs inside the perception-action loop of a robotic agent. Symbols correspond to data structures of the robot control system, and as such they have a rather simple grounding. The representations in our system are inspired by the role that episodic memories play in the acquisition of generalized knowledge in the human memory system <ref type="bibr" target="#b18">[18]</ref>.</p><p>The proposed representation of episodic memories consists of two parts. One part stores experiences and events as symbolic data. Those events and experiences can be e.g. perceived objects, or performed actions, their duration, and possible failures. The second part stores sensor data from the robot in a database. We define this unstructured data as sub-symbolic data. In the first section, we will describe the symbolic knowledge representation. An overview about the subsymbolic data will be given afterwards. Then, we will show how those memories can be used to improve the robot's action models by getting insights about manipulation activities. This will be achieved by using a combination of query answering and visual analytic tools.</p><p>Our KR&amp;R system is made available as part of the knowledge web service openEASE<ref type="foot" target="#foot_0">1</ref>  <ref type="bibr" target="#b4">[4]</ref>. The web service gives the KR community the opportunity to do research in the context of real robot experiments. Researchers in the field of KR-based robot control can further extend the knowledge base of the web service by providing additional episodic memories of their robots performing manipulation activities.</p><p>We use the openEASE platform for storing and managing the episodic memories represented with our model. 
It also allows to ask queries about it, such as how the robot was moving when an action was performed, and to visualize snapshots of the activity with visual annotations. Figure <ref type="figure" target="#fig_0">1</ref> shows such an example where the robot was closing a drawer in a kitchen environment. The action is properly segmented into the different motion phases, which is also visible in the Figure. The vision is to collect a large data set of episodic memories, and utilize them for learning tasks to gain a better understanding about manipulation activities.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Related Work</head><p>There are several projects with efforts to provide symbolic knowledge about manipulation activities to robots. The most notable one is the IEEE-RAS working group ORA (Ontologies for Robotics and Automation) <ref type="bibr" target="#b13">[13]</ref>, which aims at defining standards for knowledge representation in robotics. Schlenoff <ref type="bibr" target="#b12">[12]</ref> also presented a related approach for detecting intentions in cooperative human-robot environments based on states which are more easily recognizable by sensor systems than actions. In his work, intentions are also used for the prediction of the next action. For this work, we extend the KnowRob system <ref type="bibr" target="#b17">[17]</ref>, which, among others, defines concepts for actions, and their effects <ref type="bibr" target="#b16">[16]</ref>. KnowRob also has a notion of motion phases, but these are not defined using force dynamics.</p><p>Another related branch of research is task and motion planning. In this work, we present an action model that can be used to yield higher level activities from observations of force events. Such force events can also be detected through haptic feedback, and be used to minimize uncertainty during manipulation activity planning <ref type="bibr" target="#b19">[19]</ref>. The relation of our system to general planning systems is that planning domains can be represented using our model and that plan parameters can be inferred from knowledge represented in our system. Action models in traditional planning systems (such as PDDL) often only consider action preconditions and their effects, and do not incorporate more detailed information about motions and forces. More recently, systems emerged that enable robots to perform planning on both task and motion level by introducing an interface layer between task and motion planner <ref type="bibr" target="#b14">[14,</ref><ref type="bibr" target="#b5">5]</ref>. Our action model could be used by such systems to represent tasks, and to define action pre-conditions which are occurrences of force events.</p><p>Another aspect is that our system can yield partial boundaries of motion phases given some observation. Motion segmentation methods typically apply some form of clustering to build stochastic representations of primitive motions and motion sequences. These methods include self-similarity <ref type="bibr" target="#b8">[8]</ref>, k-means <ref type="bibr" target="#b10">[10]</ref> or hierarchical <ref type="bibr" target="#b20">[20]</ref> clustering. Primitive motions are often represented as Hidden Markov Models <ref type="bibr" target="#b9">[9,</ref><ref type="bibr" target="#b11">11,</ref><ref type="bibr" target="#b15">15]</ref> and sequences as stochastic motion graphs <ref type="bibr" target="#b15">[15]</ref>. This research has mainly focused on body motions with some exceptions that also consider object movement <ref type="bibr" target="#b9">[9,</ref><ref type="bibr" target="#b11">11]</ref>. Contrary to our approach, the listed motion segmentation approaches do either not consider manipulated object movement or only consider its trajectory. Instead, we define motion boundaries according to interactions of objects with the physical world through force dynamics. These contact states seem particularly important for control strategies employed by humans <ref type="bibr" target="#b6">[6]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Narrative of Episodic Memories</head><p>This section introduces an action model for robots inspired by the Flanagan model. The basis of it are force events that occur when an agent moves its body, and the different motion phases of actions. Our ontology is organized along these areas. It has 4 levels: Force events, situations, motion phases, and intentional activities. In addition, we use rules to declare identity constraints. In this section we provide a description of how this information is organized and represented.</p><p>In this work, we build upon the KnowRob ontology, and (manually) extend it with concepts of our action model such as ForceEvent and PouringMotion. We have chosen KnowRob because it provides the necessary infrastructure for interfacing with robot control systems, and to record episodic memories from task execution. It defines concepts such as Event and Situation, and also specific ones to describe e.g. robots and their parts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Force Events</head><p>At the lowest level of our action representation there are events that physical objects cause in a (simulated) physical world. They are described independently from intentions. This is to allow detecting them fully automated, without taking into account previous events and higher level knowledge about task or embodiment.</p><p>PhysicalEvent Event is the most general concept in this ontology. It implies that physical events occur at a particular time instant (derived from Event), and that at least one object is involved. Involved means that one of their physical properties is salient during the event. This is the case if the object involved is created or destroyed, touched or untouched, transformed into something else, etc.</p><p>The most essential events are the contact events (ContactEvent) that occur whenever an object moves in the world such that it touches (contact+ involved) another object within a spatial region (contactRegion).</p><p>The property contact+ is further decomposed into functional properties contact+ 1 and contact+ 2 denoting the two salient objects during the contact event (the two objects can be randomly assigned). The objects remain touched until they separate again which is indicated by a LeavingContactEvent. The contact is either caused by an agent moving objects into contact, or through a physical process such as gravity, for example, pulling an object such that it falls onto the floor.</p><p>Creation (CreationEvent) and destruction (De-structionEvent) events are also distinctive subgoals of activities that we use for activity representation at a higher level of our ontology (e.g., cutting a bread creates a slice of bread).</p><p>The last category of physical events we consider in the scope of this work are fluid flow events (Fluid-FlowEvent). These are events in which some liquid or gaseous substance moves, for example, milk flowing from a package to a glass, or water flowing in a river. Such events may be intended as in "pouring milk in a glass", or unintended as in "spilling milk on the floor during navigating". The primary involved object is the liquid or gaseous substance, linked to the event via the functional property fluid involved.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Force Situations</head><p>At the next level of our ontology there are situations during which force events occurred (ForceSituation Situation). Force events occur at time instants, for example, in the moment the hand touches some object, and when it leaves contact again. We use such temporal patterns of force events to expand them to distinctive situations.</p><p>Sub-events are linked to situations via the inverse functional event object property. With inverse functional we imply that each event can only be the subevent of a single situation. For detecting situations, we use two dedicated events: One indicating the start and the other indicating the end of the situation. These are represented using the functional properties starter event for the event starting the situation, and stopper event for the one stopping it.</p><p>Surely, starter event should occur before stopper event. Situations during which the object is not in contact could else be classified as contact situations. We use predicates from Allen's interval algebra <ref type="bibr" target="#b0">[1]</ref> and an identity constraint to assert this relation between starter and stopper event. As illustration, this constraint can be written as:</p><formula xml:id="formula_0">∀ instance of(x, ForceSituation) : ∃(stopper • after • starter − )(x, x)<label>(1)</label></formula><p>Note that the fact that some event occurred after another one is inferred on demand by our reasoner and does not need to be asserted. The begin time of the situation is further defined as time of occurrence of the starter, and the end time as the time of occurrence of the stopper.</p><p>The starter event of contact situations is the contact event and the stopper event is the leaving contact event. Both have exactly the same involved objects. We represent this type of information using identity constraint rules using a property chain starting from the starter event via involved objects, stopper event, and back to the starter event.</p><p>Fluid flow situations are a bit different because there are no distinct starter and stopper event types. At some time instant the first and at a later time the last fluid flow event of a situation occurs. However, not every sequence of fluid flow events referring to the same fluid makes a situation. If the container is put aside for a while, for example, one would rather say that the situation ended then, and that a new situation starts when the container is used later on. This can be enforced by asserting that, during fluid flow situations, the container may only be salient for fluid flow events.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Motion Phases</head><p>Motions can be detected by monitoring the joint configuration of an agent. Movements are either reflexive or intentional. But at this level of our ontology, without knowing intentions of agents, we can not distinguish between reflexive and intentional motions and represent motions solely in terms of expected events and body parts used.</p><p>The different body parts are defined in the KnowRob ontology. Here, we define a general "body part moved" concept for each of these body parts. We define the functional relation partMoved to represent which body part moved during a motion, and restrict the range of this property to the corresponding body part type. For ArmMovement's, for example, we assert: ∀partM oved.Arm and = 1partM oved.Arm. Force events salient for a motion are denoted by the inverse functional event relation. Temporal ordering constraints are asserted by temporal properties before, after, and during.</p><p>Here, we only investigate arm movements. Hand movements are also represented, but only at a coarse level using a boolean state: Opened or closed. We also ignore gaze motions in this work. However, it would be interesting to look into gaze contact events and to compare gaze patterns for different expert levels in future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Arm Movement</head><p>Arm movements are fundamental for object manipulation. The repertoire of different arm motions of humans is rich: reaching, lifting, throwing, cutting, pouring, etc. Some of which have distinct patterns of force dynamic events, such as cutting, that we use for representing them.</p><p>We use force events as delimiters of motion phases. In particular important are contact situations between body parts and other objects. Motions during which lifetime the contact between body part and object is continuously salient are called carrying motions (Car-ryingMotion). The body part in contact with the object must be part of the body part (denoted by partOf ) which is moved during the motion. This is to allow, for example, that the contact occurs between hand and tool while the body part referred to by the motion is the arm (which in turn has a hand part).</p><p>Objects held by agents may also touch other objects or liquids during the motion, causing distinct force events during that interaction. We use this pattern of force events for the representation of tool motions. A cutting motion, for example, is a carrying motion, performed with a cutting tool, during which some object was cut into pieces. Cutting events may also be destruction events in case the object cut into pieces entirely disappeared. We further assert that the tool used in the cutting event (cutter ) is also salient during the carrying situation.</p><p>Another challenging manipulation task is pouring. It can be performed in many ways, and on many different expert levels. The motion profiles of different expert levels are drastically different, but they all generate fluid flow events when particles are leaving the source container. We represent pouring motions as contact situations with a subgoal which is a fluid flow situation. First, we state that pouring motions are carrying situations where a container that contains some fluid is a salient object, and that at least one fluid flow situation is a subgoal of this situation. We further state that the fluid transported in fluid flow events of subgoals is exactly the fluid inside of (contains) the contacted container.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Activities</head><p>At the highest level of our ontology there are activities composed of motions with expected event patterns. At this level of the ontology, the intention of agents is implied by action concepts. The standard example quoted in the work of Flanagan et al. is a fetch-andplace activity. During fetch-and-place tasks, there is a contact situation between agent and fetched object, and also distinct events indicating that the carried object first leaves contact to a supporting surface, and later gets into contact with a supporting surface again.</p><p>We state that fetch-and-place activities have a submotion which is a carrying motion. And that there are two additional force events linked to the action via the subevent relation. We further state that there is a subevent in which the carried object looses contact to a supporting surface.</p><p>At this level, we can distinguish between colliding, supporting, and intentionally touching. Unexpected contacts during an activity are classified as collisions. This makes it very easy to detect them. With expected we mean that the activity concept asserts their occurrence during the activity.</p><p>We use the same scheme to distinguish between pouring and spilling: Pouring actions have intended fluid flow subgoals while spillage events are exactly the unintended fluid flow events occurring during an action. More concretely, pouring actions have a target location where the fluid should be poured into or onto. We classify all fluid flow events where the fluid is transported to somewhere else then the target location as spillage events.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Experience of Episodic Memories</head><p>Experience data captures low-level information about experienced activities represented as time series data streams. This data has often no or only unfeasible lossless representation as facts in a knowledge base. To make this data knowledgable, procedural hooks are defined in the ontology to compute relations from the experience data, and to embed this information in logicbased reasoning.</p><p>The data is stored in a NoSQL database using JSON documents. Each individual type of data is stored in a collection named according to the type of data stored in it. When imported, the knowledge system stores the data in a MongoDB<ref type="foot" target="#foot_1">2</ref> server, for which the knowledge system implements a client for querying the data during question answering.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Pose Data</head><p>A robotic system typically has many mobile components arranged in a kinematic chain. Each component in a kinematic chain has an associated named coordinate frame such as world frame, base frame, gripper frame, head frame, etc. 6 DOF relative poses are assigned to frames. These are usually updated with about 10 Hz during movements, and expressed relative to the parent in the kinematic chain to avoid updates when only the parent frame moves. The transformation tree is rooted in the dedicated world frame node (also often called map frame).</p><p>The data is used by our knowledge system to answer questions such as: "Where was the base relative to the object, 5 seconds ago".</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Reasoning with Episodic Memories</head><p>The knowledge represented in acquired experiences is very comprehensive. It not only contains narrations of activities but also raw experience data. Competent robot behavior needs both: Experience data encodes particularities of motions such as forces and velocities, and the narrative is required to make sense of the data at higher cognitive levels.</p><p>Here, we provide reasoning examples with our action representation. We first describe how activities can be obtained from force events, and also how an agent can make sense of action concepts. We finally outline some analytical reasoning tasks that can be performed on episodic memories.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Activity Parsing</head><p>In virtual worlds, force dynamic events can be monitored perfectly. These can be asserted to the knowledge base as they occur. Given the occurrence of force events, we can infer new knowledge using descriptions from higher levels of our ontology. In the first step, the events are expanded to situations. The situations are then refined to motions with distinct force event patterns. Finally, high level activities are detected based on patterns of force events and motions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Expanding Force Events</head><p>The expansion process exploits representations of situation concepts to identify events that determine the situation. Situations are determined by so called starter and stopper events. The events are processed from earliest to latest. A situation symbol is created when a starter event was detected, and a triple that specifies the starter relation is asserted. The procedure stores a list of situations without stopper events. For each new event, this list is first iterated to test whether the event is a stopper event of the situation, and a triple that specifies the stopper relation is asserted if this is the case. Finally, it is also tested if new events are sub-event of one of the situations without stopper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Classifying Motions</head><p>We assume that arm motions are only segmented by zero velocity segmentation in advance. We use force events as delimiters for coarse-grained segmentation. We think that this segmentation is sufficient because it captures the force events which are the essential subgoals of manipulation activities. Here, we only consider arm motions. For each situation during which an arm motion occurred, we iterate through the different subclasses of ArmMotion which are also contact situations, and we test if classifying the situation with that type would yield a contradiction. The motion type is asserted if this is not the case. The motion classes are disjoint such that situations can only be classified as being instance of one of the motion classes.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Parsing Activities</head><p>Motions and force events are then used as building blocks for activities. Activities can be parsed using rules that detect temporal patterns of events and motions that are distinctive for them. Force events and motions that are subgoals of activities are denoted by the subevent and submotion properties. Patterns with partial ordering constraints can be inferred from this model. The output of the parser is an ontology, describing instances of detected actions. Here, we provide one hand-written rule that is used to detect pick-andplace activities shown in Algorithm 1.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm 1 Detect Pick-and-Place</head><p>1: procedure detect-pick-and-place</p><p>gain-support-event(?s, ?ev2, ?obj),</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>7:</head><p>before(?ev1, ?ev2).</p><p>8: procedure loose-support-event(?s,?ev,?obj)</p><formula xml:id="formula_2">9:</formula><p>contact-(?ev, ?obj), contact-(?ev, ?t), 10:</p><p>SupportingSurface(?t), 11:</p><p>stopper(?s, ?x), before(?ev, ?x).</p><p>12: procedure gain-support-event(?s,?ev,?obj)</p><p>13:</p><p>contact+(?ev, ?obj), contact+(?ev, ?t),</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>14:</head><p>SupportingSurface(?t), 15:</p><p>starter(?s, ?x), after(?ev, ?x).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Activity Interpretation</head><p>Our ultimate goal is to enhance the performance of robots by supplying them with knowledge about everyday activities, and in particular with high-level stories about what happened combined with experience data.</p><p>In this section, we provide a description of how robots may use the information represented in episodic memories.</p><p>A typical query first asks for a particular semantic action that fulfills certain constraints such as being successful, being performed by a particular agent, etc. The inferred action symbol is bound to a variable which is used as index to sub-symbolic data in the experience part of episodic memories. This is done to access data slices corresponding to the semantic activity for which the symbol was inferred earlier. An example of such a query is shown in the following: Which corresponds to the question "Where did the robot stand at the end of put-down actions?".</p><p>Based on our model, we can also ask questions about the goals of an action, for example, "What motion phases are the subgoals of an action". For our introductory example of a robot closing a drawer (see Figure <ref type="figure" target="#fig_0">1</ref>), the motion phases can be queried with a query such as: For a more detailed description of the question answering system used here, please consult the system paper written by Beetz et al. <ref type="bibr" target="#b1">[2]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Activity Analytics</head><p>Episodic memories are very comprehensive and additional tools for inspection are required. For illustration, we pick one simple pick-and-place task performed by a robot and show how our visual analytics tools are used to get insights about manipulation activities and reasoning processes. Our goal is to provide tools for gathering data for learning algorithms, and to learn about the requirements for robots performing everyday activities. Clustering methods may be used, for example, to group actions based on their parameterizations, and to identify e.g. what kind of actions require two arms to be performed successfully, or what kind of actions require additional tools. Different components of our analytics framework will be described below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Action Hierarchy Visualization</head><p>Cognition-enabled plan frameworks, such as CRAM <ref type="bibr" target="#b2">[3]</ref>, generate action hierarchies instead of sequences of actions. This is because, in cognition-enabled plans, most actions are abstract and require reasoning which results in action hierarchies. For instance, a pick and place action requires a "pick" sub-action to be performed followed by a "place" sub-action. Action hierarchies are stored in our episodic memory as symbolic data. To get a better understanding of an experiment, openEASE contains a component to visual the whole action hierarchy. This visualization gives an overview about what actions were executed by the robot, the relationship between those actions and which tasks were successful and which not.</p><p>With our visual analytics framework we want to go beyond showing just hierarchies and statistics. Each visual component is linked to the knowledge base which allows us to perform queries on the displayed data. To be specific, the nodes in the action hierarchy can be selected by the user, and the user can ask queries about them such as getting the error type of an unsuccessful task, the time duration, etc. In addition, trajectories during actions can be queried and visualized. Having the experience data linked to the narrative of an activity further allows to correlate success of an action with e.g. the goal pose relative to the base.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Visualization of Errors</head><p>For every episodic memory we can request a cooccurrence matrix between actions and errors which occurred during an activity. Figure <ref type="figure">2</ref> shows an error matrix for a pick and place activity. The rows and columns can be sorted by frequency to get quickly an overview which actions failed the most or which error type occurs the most. Referring to Figure <ref type="figure">2</ref>, the matrix shows that most failed action was MovingToLocation due to collision. We are also using the error matrix to extract action preconditions which were not considered during plan design. Currently we are extracting the preconditions manually. In the future we are planning to automatize this extraction so the robot can extend its action model by itself.</p><p>The matrix is also linked to the knowledge base, this allows us to query detailed information about the errors. For instance, for perception errors we can query which objects could not be perceived. Those queries can give us an overview e.g. for which objects the perception system might need to be improved.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Visualization of Reasoning Tasks</head><p>Cognition-enabled plans require a significant amount of reasoning. We provide multiple visualization tools available to get insights about reasoning processes.</p><p>Figure <ref type="figure" target="#fig_5">3</ref> shows a co-occurrence matrix with the action types (rows) and the reasoning questions (columns) which are asked during a pick and place action. This matrix gives an overview which reasoning tasks were performed the most and which tasks required the most reasoning. In our example, a significant amount of spatial and perception reasoning tasks were performed. Our analytics framework serves additional statistics, such as depicted in Figure <ref type="figure">4</ref>. The left pie chart shows the ratio between the frequency of reasoning tasks compared to actions. A high number of reasoning tasks indicates the robot performed a very abstract plan since it required a lot of reasoning to be able to execute it. The right pie chart in Figure <ref type="figure">4</ref> depicts an overall time usage between reasoning and action execution. Note that even though the general amount of reasoning tasks is significantly higher than the number of actions, the action execution requires the most time. This insight gives us the the opportunity to let the robot do more expensive reasoning in the future without extending the overall experiment runtime because we could run the reasoning in parallel during the action execution.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>In this paper, we have introduced an approach for representing episodic memories of embodied agents performing manipulation tasks. The action model is inspired by a model from human psychology. Its representations are based on force dynamic events which are used to define semantics of action verbs. We have shown that patterns of force events can be used to detect intentions, and what actions an embodied agent performed. The action model is coupled with experience data that stores control level information. We believe that collections of episodic memories are key for understanding how experiential knowledge about manipulation tasks can be generalized.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: PR2 closing a drawer in a kitchen. The action is decomposed into different phases with distinct motion and force event pattern.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: PR2 closing a drawer in a kitchen. The action is decomposed into different phases with distinct motion and force event pattern.</figDesc><graphic coords="1,231.15,221.97,279.60,174.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>e n t i t y ( Act , [ an , a c t i o n , [ type , p u t t i n g d o w n ] ] ) , o c c u r s ( Act , [ , End ] ) , h o l d s ( p o s e ( p r 2 : ' p r 2 b a s e l i n k ' , Pose ) , End ) .</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>e n t i t y ( Act , [ an , a c t i o n , [ type , c l o s i n g a d r a w e r ] , [ part moved , [ an , o b j e c t , [ b a s e l i n k , HandBase ] ] ] ] ) , f i n d a l l (M, e n t i t y ( Act , [ s u b m o t i o n , M] ) , Motions ) .</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>1 Figure 2 :</head><label>12</label><figDesc>Figure 2: Co-occurrence matrix between actions and errors. The cell values indicate how many times the error occurred for each action type.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Co-occurrence matrix between actions and reasoning tasks. The cell values indicate how many times a reasoning task was performed during each action type.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>2 Figure 4 :</head><label>24</label><figDesc>Figure 4: The left chart shows the frequency of reasoning tasks (791) compared to the number of performed actions (124). The right chart shows how much time was spend during action execution (180.61 sec) and resoning (3.68 sec).</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://www.open-ease.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://www.mongodb.com/</note>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"> *   <p>The research reported in this paper has been supported by the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 "EASE -Everyday Activity Science and Engineering", University of Bremen (http://www.ease-crc.org/).</p><p>⇤ The research reported in this paper has been supported by the German Research Foundation DFG, as part of Collaborative Research Center (Sonderforschungsbereich) 1320 "EASE -Everyday Activity Science and Engineering", University of Bremen (http://www.ease-crc.org/).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Maintaining knowledge about temporal intervals</title>
		<author>
			<persName><forename type="first">J</forename><surname>Allen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="832" to="843" />
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Knowrob 2.0 -a 2nd generation knowledge processing framework for cognition-enabled robotic agents</title>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Beßler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Haidu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pomarlan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Bozcuoglu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bartels</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ternational Conference on Robotics and Automation (ICRA)</title>
				<meeting><address><addrLine>Brisbane, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Cram-a cognitive robot abstract machine for everyday manipulation in human environments</title>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Mösenlechner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tenorth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Robots and Systems (IROS)</title>
				<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m">IEEE/RSJ International Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="1012" to="1017" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Open-EASE -a knowledge processing service for robots and robotics/ai researchers</title>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tenorth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Winkler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation (ICRA)</title>
				<meeting><address><addrLine>Seattle, Washington, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note>Finalist for the Best Cognitive Robotics Paper Award</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Incremental task and motion planning: A constraint-based approach</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">T</forename><surname>Dantam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">K</forename><surname>Kingston</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chaudhuri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">E</forename><surname>Kavraki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Robotics: Science and Systems</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Control strategies in object manipulation tasks</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Flanagan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Bowman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Johansson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Curr. Opin. Neurobiol</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="650" to="659" />
			<date type="published" when="2006-12">Dec 2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">PDDL-the planning domain definition language</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ghallab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Howe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Knoblock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mc-Dermott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Veloso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Weld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wilkins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AIPS-98 planning committee</title>
				<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Efficient unsupervised temporal segmentation of motion data</title>
		<author>
			<persName><forename type="first">B</forename><surname>Krüger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vögele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Willig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Klein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Weber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Multimedia</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="797" to="812" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Learning actions from observations</title>
		<author>
			<persName><forename type="first">V</forename><surname>Kruger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Herzog</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Baby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ude</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kragic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Robotics Automation Magazine</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="30" to="43" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">An unsupervised framework for action recognition using actemes</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kulkarni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Boyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Horaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kale</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computer Vision -ACCV 2010</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Kimmel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Klette</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Sugimoto</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Unsupervised learning of action primitives</title>
		<author>
			<persName><forename type="first">V</forename><surname>Sanmohan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Krüger</surname></persName>
		</author>
		<author>
			<persName><surname>Kragic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">10th IEEE-RAS International Conference on Humanoid Robots</title>
				<imprint>
			<date type="published" when="2010">2010. 2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Schlenoff</surname></persName>
		</author>
		<title level="m">Déduction d&apos;intentions au travers de la représentation d&apos;états au sein des milieux coopératifs entre homme et robot</title>
				<meeting><address><addrLine>Dijon, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
		<respStmt>
			<orgName>University of Burgundy</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
	<note>Original French title: Déduction d&apos;intentions au travers de la représentation d&apos;états au sein des milieux coopératifs entre homme et robot</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">An IEEE standard ontology for robotics and automation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Schlenoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Prestes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Madhavan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Goncalves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Balakirsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kramer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Miguelanez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="1337" to="1342" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Combined task and motion planning through an extensible plannerindependent interface layer</title>
		<author>
			<persName><forename type="first">S</forename><surname>Srivastava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Riano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chitnis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Abbeel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Robotics and Automation (ICRA)</title>
				<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Spatio-temporal structure of human motion primitives and its application to motion prediction</title>
		<author>
			<persName><forename type="first">W</forename><surname>Takano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Imagawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Nakamura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Robotics and Autonomous Systems</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="page" from="288" to="296" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">A unified representation for reasoning about robot actions, processes, and their effects on objects</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tenorth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</title>
				<meeting><address><addrLine>Vilamoura, Portugal, October</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="7" to="12" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">KnowRob -A Knowledge Processing Infrastructure for Cognitionenabled Robots</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tenorth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. Journal of Robotics Research</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="566" to="590" />
			<date type="published" when="2013-04">April 2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Episodic and semantic memory 1. Organization of Memory</title>
		<author>
			<persName><forename type="first">E</forename><surname>Tulving</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Academic</title>
		<imprint>
			<biblScope unit="volume">381</biblScope>
			<biblScope unit="issue">e402</biblScope>
			<biblScope unit="page">4</biblScope>
			<date type="published" when="1972">1972</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Touch based POMDP manipulation via sequential submodular optimization</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Vien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Toussaint</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">15th IEEE-RAS International Conference on Humanoid Robots, Humanoids 2015</title>
				<meeting><address><addrLine>Seoul, South Korea</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">November 3-5, 2015. 2015</date>
			<biblScope unit="page" from="407" to="413" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Hierarchical aligned cluster analysis for temporal clustering of human motion</title>
		<author>
			<persName><forename type="first">F</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">D L</forename><surname>Torre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Hodgins</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="582" to="596" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
