<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Intention Recognition and Communication for Human-Robot Collaboration</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Hadi</forename><surname>Banaee</surname></persName>
							<email>hadi.banaee@oru.se</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Centre for Applied Autonomous Sensor Systems (AASS)</orgName>
								<orgName type="institution">Örebro University</orgName>
								<address>
									<addrLine>Fakultetsgatan 1</addrLine>
									<postCode>701 82</postCode>
									<settlement>Örebro</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Franziska</forename><surname>Klügl</surname></persName>
							<email>franziska.klugl@oru.se</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Centre for Applied Autonomous Sensor Systems (AASS)</orgName>
								<orgName type="institution">Örebro University</orgName>
								<address>
									<addrLine>Fakultetsgatan 1</addrLine>
									<postCode>701 82</postCode>
									<settlement>Örebro</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fjollë</forename><surname>Novakazi</surname></persName>
							<email>fjolle.novakazi@oru.se</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Centre for Applied Autonomous Sensor Systems (AASS)</orgName>
								<orgName type="institution">Örebro University</orgName>
								<address>
									<addrLine>Fakultetsgatan 1</addrLine>
									<postCode>701 82</postCode>
									<settlement>Örebro</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Stephanie</forename><surname>Lowry</surname></persName>
							<email>stephanie.lowry@oru.se</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Centre for Applied Autonomous Sensor Systems (AASS)</orgName>
								<orgName type="institution">Örebro University</orgName>
								<address>
									<addrLine>Fakultetsgatan 1</addrLine>
									<postCode>701 82</postCode>
									<settlement>Örebro</settlement>
									<country key="SE">Sweden</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Intention Recognition and Communication for Human-Robot Collaboration</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">07DE42450A6455F91CBC238E831D71BC</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Intention recognition</term>
					<term>intention granularity</term>
					<term>human-robot collaboration</term>
					<term>human-robot communication</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Human-robot collaboration follows rigid processes in order to ensure safe interactions. When deviations from the predetermined tasks occur, processes typically come to a halt. This position paper proposes a conceptual framework for intention recognition and communication that enables a finer granularity of understanding of intentions, facilitating more efficient and safe human-robot collaboration, especially in the event of deviations from expected behaviour.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The promise of Industry 4.0 is a paradigm shift towards interconnected manufacturing systems that leverage advanced technologies such as artificial intelligence and automation to optimise production processes. As removing humans from manufacturing is not a viable option <ref type="bibr" target="#b0">[1]</ref>, the focus is instead on finding ways to integrate humans and machines so they can work collaboratively and efficiently. Mixed human-robot teams are being developed in various application areas, such as assembly and transportation, to combine the flexibility, adaptability, and problem-solving skills of humans with the precision and efficiency of robots. In the past, this has been accomplished by organising activities in highly controlled environments where robots and humans are kept apart by enforcing safeguards, such as maintaining spatial or temporal distances between them and assigning tasks or objectives to each agent, whether human or robotic.</p><p>For a successful transition into an Industry 4.0 and, subsequently, Industry 5.0 setting, robots need the ability to coexist and interact with humans in both physical and social settings. This entails creating a safe environment in which humans can perceive, interpret, and respond to the actions and intentions of robots, and vice versa. This becomes a challenge, however, in the event of deviations from the established processes or assigned tasks. While a simple intention recognition (IR) approach places the responsibility exclusively on the robotic agent to detect actions and adapt its behaviour accordingly, a more effective approach would integrate hierarchical IR with communication strategies to detect and clarify the reasons for deviations. This would allow responsibilities to be shared, ensuring a seamless continuation of teamwork and, ultimately, productivity in the evolving industry landscape.</p><p>To achieve this goal, we argue that there is a need to identify the appropriate level of granularity, sequence, and abstraction of tasks as intentions for an IR and communication framework in human-robot teams. Therefore, this position paper explores alternative solutions driven by an intention recognition approach to managing human-robot collaboration (HRC), or mixed-agent teams, to enable more effective and seamless interactions between humans and robots that do not require rigid safety constraints, but instead rely on communication strategies to handle deviations and enable enhanced reactions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>For HRC to be successful, the robot must analyse and understand human intentions, as well as effectively convey its own goals. Intention refers to the mental state or attitude of aiming to do or achieve something <ref type="bibr" target="#b1">[2]</ref>. It involves a conscious decision or plan to act, often driven by a purpose or goal. Hence, understanding intent involves identifying the occurring activity, inferring the objectives of the task, and predicting the next actions. In an HRC context, IR refers to the process of identifying the intentions of agents, whether they are human or robotic, by examining their sequences of actions and/or analysing the impact of their actions on the state of the environment <ref type="bibr" target="#b2">[3]</ref>. In other words, IR can be defined as the process of inferring the intentions of an agent by analysing their behaviour <ref type="bibr" target="#b3">[4]</ref>. Hence, in this paper, we define IR as the capacity to identify the particular goal being pursued by precisely discerning the exact course of action that is taken <ref type="bibr" target="#b4">[5]</ref>.</p><p>The current body of research presents various methodologies to address this challenge. One of the driving ideas is that robots need to be more proactive in their interactions with humans, which puts IR at the centre of possible solutions. For example, Tong and colleagues <ref type="bibr" target="#b5">[6]</ref> explored a method for proactive human IR based on context changes and triggers, utilising vision-based technologies. However, reliance on a single feature can make such a system unreliable. Other research examined how two-way IR and communication affect HRC, predicting what a person is about to do, such as picking up an object, to help team members coordinate more effectively <ref type="bibr" target="#b6">[7]</ref>. In yet another attempt to explore task sequences as a source for IR, the trajectory of human actions was analysed to predict subsequent actions <ref type="bibr" target="#b7">[8]</ref>, thus breaking intentions down into smaller actions. A different study utilised an inverse planning approach to IR, putting forward a logic-based approach for fully observable systems. This approach inferred the human's goal by observing a sequence of actions and still allowed for small deviations in the sequence, where the human performs their actions in a different order <ref type="bibr" target="#b8">[9]</ref>.</p><p>These approaches do not address the granularity needed to recognise intermediate intentions hierarchically. Moreover, deviations are studied merely by moving from a limited set of actions to a single intention as the overall goal. This creates a challenge for complex HRC, since inferring only the final goal as an intention from the atomic actions might not lead to enhanced decision-making.</p><p>Typically, in HRC, communication (whether explicit or implicit) is used to convey an intention from the robot to the human <ref type="bibr" target="#b9">[10]</ref>. The framework presented here, however, aims to facilitate purposeful communication in the event of a deviation. This allows the robot to adjust its decision-making and subsequent reaction when interacting with a human agent.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Conceptualisation of IR Framework</head><p>In the context of mixed human-robot teams, intention recognition is crucial to addressing two main challenges: 1) safety enhancement, by avoiding potential harm or hazards, and 2) efficiency, by providing proper support and assistance between the agents. Our conceptual framework for intention recognition and communication addresses these challenges by focusing on the appropriate level of granularity, the sequentiality of actions, and deviations in performing the tasks. To formalise the conceptual framework, we first clarify the input of the framework in such a context. Then, we specify in which situations the proposed intention recognition framework will be triggered.</p><p>In a human-robot environment, the agents (i.e., humans and/or robots) are asked to fulfill a certain goal by performing a sequence of predetermined tasks. Each of the predetermined tasks might be either at the level of primitive observable actions (e.g., pick a tool) or at a higher level of abstraction (e.g., repair the machine). The agents should have a shared understanding of the tasks and the overall team goal. However, situations may arise in which an agent deviates from the shared plan. In such a situation, the other agents need to recognise the intentions behind the deviation and react accordingly, to avoid unnecessary interruptions to the workflow.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Conceptual Model</head><p>Our proposed conceptual model takes the following three aspects into account:</p><p>Temporal sequence of actions: An intermediate intention may not be directly observable from a single action. Therefore, it is important to consider the sequence of actions to infer the intermediate intention behind them. For example, in the kitting task below, a robot follows the sequence of move-pick-move-place actions to fulfill the intermediate intention of "collecting one item".</p><p>Granularity of intentions in a hierarchical structure: One can consider primitive actions as the first level of intentions. The combination of these intentions can then lead to inferring intermediate intentions at a higher level of abstraction. These intermediate intentions can, in turn, be combined to infer even higher levels of abstraction as intermediate intentions. The highest level of abstraction is then the shared intention of the team as the overall goal. Note that the granularity of the intentions is not fixed and can be adjusted based on the context and the requirements of the task.</p></div>
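To make these two aspects concrete, the following minimal Python sketch (ours, not part of the paper) treats each level of the hierarchy as a table mapping known sequences of lower-level intentions to an intermediate intention; all action and intention names, such as `collect_one_item`, are hypothetical examples.

```python
# Illustrative sketch only: each level maps a known sequence of lower-level
# intentions (or primitive actions) to one intermediate intention.
INTENTION_HIERARCHY = [
    # level 0: sequences of primitive actions -> intermediate intentions
    {("move", "pick", "move", "place"): "collect_one_item"},
    # level 1: sequences of intermediate intentions -> higher-level intentions
    {("collect_one_item", "collect_one_item"): "fill_kit"},
]

def infer(sequence, level=0):
    """Greedily rewrite a sequence using the pattern table of one level."""
    patterns = INTENTION_HIERARCHY[level]
    result, i = [], 0
    while i < len(sequence):
        for pattern, intention in patterns.items():
            if tuple(sequence[i:i + len(pattern)]) == pattern:
                result.append(intention)
                i += len(pattern)
                break
        else:
            result.append(sequence[i])  # unmatched: keep the item as-is
            i += 1
    return result

actions = ["move", "pick", "move", "place", "move", "pick", "move", "place"]
level1 = infer(actions, level=0)   # two occurrences of "collect_one_item"
level2 = infer(level1, level=1)    # the higher-level intention "fill_kit"
```

Because the tables are data rather than code, the granularity of the hierarchy can be adjusted per task, as the model requires.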
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Deviation from the predetermined tasks:</head><p>In the process of inferring the intermediate intentions toward achieving the goal, the agents should be able to detect a deviation from the shared plan. The deviation is either detected by observing an unexpected sequence of primitive actions or is identified after inference of intentions, i.e., as a deviation in the intermediate intentions.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> illustrates the conceptual model of the aspects behind the framework, considering the sequence of actions, the hierarchy of intermediate intentions, and deviation detection.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Framework Processes</head><p>Based on the concepts illustrated in Fig. <ref type="figure" target="#fig_0">1</ref>, the following processes are required for the overall adaptive intention recognition and communication framework.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>1) Observation and Context Analysis:</head><p>The agents follow the shared tasks and observe the primitive actions of the other agents, considering the context and the goal of the task. These observable actions form the input to the IR process.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2) Intention Recognition:</head><p>The sequence of observed actions is analysed to infer the intermediate intentions of the other agents, considering the temporality of the actions and the granularity of the lower-level intentions. These inferences can be made at successively higher levels of intention abstraction until they support meaningful decision-making.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>3) Deviation Detection:</head><p>The agents recognise a deviation from the shared tasks either by observing the sequence of actions performed by the other agent or, at a higher level of abstraction, by recognising a change in the intermediate intentions.</p></div>
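The two detection routes described above can be sketched as follows; this is our illustrative simplification, not the paper's implementation, and the function, action, and intention names are hypothetical.

```python
# Illustrative two-level deviation detection: on the raw action stream,
# and on the inferred intermediate intentions. Names are hypothetical.
def detect_action_deviation(observed, planned):
    """Return the index of the first observed action departing from the plan."""
    for i, (obs, plan) in enumerate(zip(observed, planned)):
        if obs != plan:
            return i
    return None  # no deviation in the compared prefix

def detect_intention_deviation(inferred, expected):
    """Return inferred intermediate intentions absent from the shared plan."""
    return [intent for intent in inferred if intent not in expected]

plan = ["move", "pick", "move", "place"]
observed = ["move", "pick", "move", "enter_human_zone"]
first_deviation = detect_action_deviation(observed, plan)  # index 3
```

A deviation flagged only at the intention level (an unexpected but coherent intermediate intention) would be handled by the adaptation step below, whereas an action-level deviation may still resolve into an expected intention at a higher abstraction.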
<div xmlns="http://www.tei-c.org/ns/1.0"><head>4) Adaptation &amp; Reaction:</head><p>Given a detected deviation and the intermediate intentions expected under the shared tasks, the agents can make a decision informed by the higher-level abstract intention behind the deviation and react accordingly to the situation. The proper reaction depends on the level of abstraction of the intention and the context of the task. This component can either lead to a direct execution of the reaction or require further adaptation, depending on the complexity of the situation, which leads to the next step (i.e., communication).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 2:</head><p>The components of the framework for intention recognition and communication.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>5) Communication:</head><p>In most cases, before deciding whether a reaction is appropriate to execute, further adaptations are needed. To this end, the agents communicate the reasons behind the deviation, to ensure a shared understanding of the task and the goal. This component, together with the others, can be seen as a cycle, as a new adaptation driven by the performed communication can lead to further iterations of intention recognition.</p><p>These components, as the main building blocks of the proposed framework, create a loop of intention recognition and communication, which ensures a shared understanding of the task and the goal, as well as the safety and efficiency of the agents in a mixed human-robot environment. Figure <ref type="figure">2</ref> illustrates the components of the proposed framework and the loop of intention recognition and communication.</p></div>
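The loop formed by these components can be sketched as a single control function; this is our hypothetical simplification of the cycle, with `observe`, `infer`, and `communicate` standing in for whatever perception, IR, and dialogue mechanisms an implementation would supply.

```python
# Illustrative sketch of the observe -> infer -> detect -> adapt/communicate
# loop. The callables and return labels are hypothetical placeholders.
def run_cycle(observe, infer, expected_intention, communicate, max_iters=3):
    """Iterate until the intention matches the plan, is clarified, or gives up."""
    for _ in range(max_iters):
        actions = observe()                 # 1) observation & context analysis
        intention = infer(actions)          # 2) intention recognition
        if intention == expected_intention: # 3) no deviation detected
            return ("continue", intention)
        clarified = communicate(intention)  # 5) ask the other agent to clarify
        if clarified is not None:
            return ("adapt", clarified)     # 4) enhanced reaction on clarified intent
    return ("stop", None)                   # safe fallback: halt the workflow
```

When communication yields a clarified intention, the loop exits with an adaptation; when it repeatedly fails, the sketch falls back to the conventional halt that the framework aims to avoid.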
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Illustrative Scenario: Large-Scale Kitting Task</head><p>To demonstrate the proposed framework, we consider a large-scale kitting task in an industrial environment. The overall goal of the kitting task is to collect items from storage racks around a kitting area and place them on a central table. Figure <ref type="figure">3</ref> illustrates a simple representation of the kitting task scenario with a human and a robot in the environment.</p><p>In this scenario, we assume there are two agents, a human and a robot, working collaboratively to complete this task. Each agent has access to its designated half of the storage racks and the kitting table, and the predetermined plan they are following ensures that, to minimise collisions and disruptions, they should not enter each other's designated zones.</p><p>The agents have a shared understanding of the tasks and the goal, and they can observe each other's actions within the environment. The robot is equipped with the proposed intention recognition framework: it can observe human actions, infer human intentions, detect deviations at various levels of abstraction, and react accordingly.</p><p>When a deviation from the shared tasks is detected, the robot adapts its reactions based on the inferred intentions and communicates with the human if necessary. The robot can then make a decision based on the higher abstract intention behind the deviation and react accordingly to the situation. We emphasise the difference between the possible reactions: 1) an intrinsic reaction, where the robot directly reacts to the deviation by considering only the observed actions, or 2) an enhanced reaction, where the robot infers the intentions and communicates with the human.</p><p>Figure <ref type="figure">3</ref>: Illustration of the kitting task with a human and a robot in the environment. The goal is to retrieve items from the storage boxes around the edges of the kitting area and place them on the kitting table (top centre). Under the predetermined plan, the human should only collect items from the pink boxes on the left-hand side and the robot should collect items from the blue boxes on the right-hand side. However, deviations from the plan may sometimes occur, and the human may cross the central dividing line to enter the robot's work area.</p><p>We demonstrate two examples of deviations that may occur during the execution of the kitting task and discuss how the proposed framework can be applied to handle these situations with enhanced reactions based on inferred intentions and communication, in comparison with the intrinsic reaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Scenario 1:</head><p>The robot fails to pick up an item because the items are placed in the box in an unfortunate way. The intrinsic reaction of the robot is to give an alarm signal and then wait for the human to come and help pick up the difficult item. Under the intrinsic reaction, the robot must remain stationary while the human is within the robot's designated zone, to avoid any possible collisions.</p><p>An enhanced reaction, however, is for the robot, once it has informed the human of the picking failure, to deviate from the predetermined plan and continue collecting other items from the rack while the human retrieves the difficult item. The robot will re-plan to ensure that there is no collision with the human; that is, it must only collect items that do not interfere with the human's ability to retrieve the difficult item. The IR framework will enable the robot to recognise that the human has deviated from the predetermined plan to help the robot retrieve the difficult item, and allow the robot to communicate its own new intentions while the human is within the robot's designated zone. This enhanced reaction can potentially avoid a collision and improve the efficiency of the task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Scenario 2:</head><p>The robot has lost an item on its way to the kitting table, and the human crosses into the robot's designated zone to help retrieve it. The intrinsic reaction of the robot is to stop, to avoid collisions with the human. An enhanced reaction could be to recognise that the human has come to help and to communicate about what to do with the lost item and/or who brings it to the table. This enhanced reaction can potentially improve the efficiency of the task by resolving ambiguities in the plan execution.</p></div>
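The contrast between the two reaction styles in these scenarios can be encoded as a simple lookup; this is our illustrative encoding, and all event and intention labels are hypothetical.

```python
# Hypothetical encoding of the two reaction styles from the scenarios:
# intrinsic reactions consider only the observed event, while enhanced
# reactions additionally use the inferred human intention.
INTRINSIC = {"pick_failure": "alarm_and_wait", "human_in_zone": "stop"}
ENHANCED = {
    ("pick_failure", "human_comes_to_help"): "replan_and_continue",
    ("human_in_zone", "human_retrieves_lost_item"): "communicate_handover",
}

def react(event, inferred_intention=None):
    """Prefer an enhanced reaction when an intention was inferred;
    otherwise fall back to the intrinsic reaction for the event."""
    if inferred_intention is not None:
        enhanced = ENHANCED.get((event, inferred_intention))
        if enhanced is not None:
            return enhanced
    return INTRINSIC[event]
```

The fallback to the intrinsic table mirrors the paper's argument: when no intention can be inferred (or the inferred intention is unrecognised), the robot degrades safely to alarm-and-wait or stop.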
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">The Role of Communication for IR</head><p>In our proposed framework, the inferences driven by IR enable humans and robots to adapt and react safely and appropriately to deviations in a predetermined workflow. When deviations occur, humans and robots need to communicate and negotiate to enable reasonable decision-making, to react appropriately, and to allow the continuation of the workflow in a safe and efficient manner. The proposed framework introduces a hierarchy for inferring intentions, providing semantically meaningful knowledge at higher levels, which enables more precise communication at any of the intermediate intention levels. It is a tool to reduce ambiguity and, consequently, uncertainty in the intention recognition framework.</p><p>Communication is an essential component of the cycle of intention recognition, adaptation/reaction, and communication. Through a more purposeful understanding and recognition of each other's intentions, supported by more expressive communication strategies, the team becomes more of a peer-to-peer system: the agents assist each other in clarifying inferred intentions and aligning the team's mental model, thereby enhancing the process instead of stopping operations in case of deviations.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Conceptual model of the aspects behind the framework.</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by the Swedish Knowledge Foundation in the TeamRob Synergy Project (contract number 20210016).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Foundation for a classification of collaboration levels for human-robot cooperation in manufacturing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kolbeinsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Lagerstedt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lindblom</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Production &amp; Manufacturing Research</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="448" to="471" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><surname>Merriam-Webster</surname></persName>
		</author>
		<ptr target="https://www.merriam-webster.com/dictionary/intention" />
		<title level="m">Intention</title>
				<imprint>
			<date type="published" when="2024-04-16">16 Apr. 2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Logic-based approaches to intention recognition</title>
		<author>
			<persName><forename type="first">F</forename><surname>Sadri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of research on ambient intelligence and smart environments: Trends and perspectives</title>
				<imprint>
			<publisher>IGI Global</publisher>
			<date type="published" when="2011">2011</date>
			<biblScope unit="page" from="346" to="375" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Intention recognition with problog</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">B</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Belle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Petrick</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page">806262</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Communicating agent intentions for human-agent decision making under uncertainty</title>
		<author>
			<persName><forename type="first">J</forename><surname>Porteous</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lindsay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Charles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;23, International Foundation for Autonomous Agents and Multiagent Systems</title>
				<meeting>the 2023 International Conference on Autonomous Agents and Multiagent Systems, AAMAS &apos;23, International Foundation for Autonomous Agents and Multiagent Systems<address><addrLine>Richland, SC</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="290" to="298" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Context change and triggers for human intention recognition</title>
		<author>
			<persName><forename type="first">T</forename><surname>Tong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Setchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hicks</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia Computer Science</title>
		<imprint>
			<biblScope unit="volume">207</biblScope>
			<biblScope unit="page" from="3826" to="3835" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Effects of integrated intent recognition and communication on human-robot collaboration</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">L</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Gutierrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Khante</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">S</forename><surname>Short</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Thomaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="3381" to="3386" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Recurrent neural network for motion trajectory prediction in human-robot collaborative assembly</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">X</forename><surname>Gao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CIRP annals</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="page" from="9" to="12" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Two ways to make your robot proactive: Reasoning about human intentions or reasoning about possible futures</title>
		<author>
			<persName><forename type="first">S</forename><surname>Buyukgoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Grosinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chetouani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saffiotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Robotics and AI</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">929267</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Purposeful communication in human-robot collaboration: A review of modern approaches in manufacturing</title>
		<author>
			<persName><forename type="first">R</forename><surname>Salehzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jalili</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="129344" to="129361" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
