<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Hybrid remote expert - an emerging pattern of industrial remote support</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ethan</forename><surname>Hadar</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Zafed Academic College</orgName>
								<address>
									<settlement>Zafed</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">IBM Research Labs</orgName>
								<address>
									<settlement>Haifa</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Joseph</forename><surname>Shtok</surname></persName>
							<email>josephs@il.ibm.com</email>
							<affiliation key="aff1">
								<orgName type="institution">IBM Research Labs</orgName>
								<address>
									<settlement>Haifa</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Benjamin</forename><surname>Cohen</surname></persName>
							<email>cohen@il.ibm.com</email>
							<affiliation key="aff1">
								<orgName type="institution">IBM Research Labs</orgName>
								<address>
									<settlement>Haifa</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yochay</forename><surname>Tzur</surname></persName>
							<email>yochayt@il.ibm.com</email>
							<affiliation key="aff1">
								<orgName type="institution">IBM Research Labs</orgName>
								<address>
									<settlement>Haifa</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Leonid</forename><surname>Karlinsky</surname></persName>
							<email>leonidka@il.ibm.com</email>
							<affiliation key="aff1">
								<orgName type="institution">IBM Research Labs</orgName>
								<address>
									<settlement>Haifa</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Hybrid remote expert - an emerging pattern of industrial remote support</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">51C1B8BE5A6565DB0AC04262DEE3AA2A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:10+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Augmented Reality</term>
					<term>Context Awareness</term>
					<term>Remote Support</term>
					<term>Context and customer needs</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>One of today's challenges in the industrial domain is to support workers' decision-making and comprehension of the situation by enhancing and extending workers' vision. This paper examines a pattern of industrial needs and related solutions for technical support and information-based guidance. The information is provided to field technicians in order to repair and maintain equipment on site, using Augmented Reality (AR) applications running on smart glasses or tablets. The pattern suggests three main interconnected modes of consuming contextual technical information with Augmented Reality: (1) assisted by pre-recorded augmented information, (2) guided by a remote human expert, or (3) supported by an autonomous cognitive system. The pattern, which emerged from harvesting our field experience and a literature review, is termed the hybrid remote expert; in this paper, we describe how it is observed in the industry and discuss its business uses.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The applicability of Augmented Reality (AR) in business and industrial settings is gaining momentum in aspects such as ubiquitous computing, the Internet of Things (IoT), and Artificial Intelligence (AI) interaction <ref type="bibr" target="#b0">[1]</ref> <ref type="bibr" target="#b1">[2]</ref>. AR technology enables a person to perceive an additional layer of visual information in which the entities are spatially and contextually correlated with real-world objects. The technological requirements include real-time object recognition and tracking in 3D space over time. Using mobile devices or smart glasses for immersive interaction with augmented information enables a myriad of new or more efficient industrial and business applications. These applications focus on presenting and interacting with just-in-time information aligned with the situational context, in order to support people's decision-making and comprehension of the environment. One of the leading applications in the industrial domain is a Remote Assistant or Advisor for field technicians, establishing a collaboration with an expert in the back office. The remote expert is able to produce specific instructions that are sent to the technician's see-through device and anchored to the real surrounding objects for usage during motion. At present, most AR applications are hosted on handheld tablets, but there is a strong trend towards smart AR glasses and helmets, intended to free the user's hands while enlarging the augmented field of view.</p><p>X. Franch, J. Ralyté, R. Matulevičius, C. Salinesi, and R. Wieringa (Eds.): CAiSE 2017 Forum and Doctoral Consortium Papers, pp. 33-40, 2017. Copyright 2017 for this paper by its authors. Copying permitted for private and academic purposes.</p><p>The remote advisor can be implemented in different ways <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6</ref>], yet with similar usage and overall business goals. In particular, the aim is to:</p><p>- Reduce verbal confusion with graphical and location-based instructions. - Save time and increase collaboration efficiency. - Reduce experts' expenses for traveling to remote sites. - Enable a single remote expert to support many concurrent field technicians. - Increase field operations quality. - Harvest and automate the capture of the aging workforce's tacit knowledge.</p><p>During the collaboration, the remote expert, located in the office, receives a visual feed from the field person. When requested, the remote expert can guide and provide textual or verbal information, including tagging augmentation points and associated information on the fly. This information is displayed on the device while being anchored to real-world objects. In cases where the remote location has not been 3D-scanned in advance, the field person needs to model the 3D scene on the fly or send the remote expert some scene videos for remote 3D modeling. Consequently, the remote person can create a 3D model, annotate it, create a procedure, and send it back to the field person.</p><p>Companies today employ remote assistance based on cognitive technology. Examples are Siri from Apple <ref type="bibr" target="#b6">[7]</ref> for social interaction, Alexa from Amazon <ref type="bibr" target="#b7">[8]</ref> for controlling home appliances or shopping online, Watson Cognitive Computing from IBM <ref type="bibr" target="#b8">[9]</ref>, which enables the creation of cognitive assistants such as for a sales force, and OnStar Go from General Motors <ref type="bibr" target="#b9">[10]</ref>, a personal assistant for automating driving actions created on the IBM Watson Cognitive platform <ref type="bibr" target="#b10">[11]</ref>.</p><p>Examination of the above approaches for services driven directly by the user or by an assisting person, as well as requests from companies for services driven by cognitive Artificial Intelligence (AI), has yielded this study of an emergent repeating pattern for the Remote Assistant, whether person-, self-, or cognitive-driven. This paper surveys technologies, research papers, field experience, and customer needs related to the pattern.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background</head><p>The AR-supported Hybrid Remote Expert (HRE) pattern presented in this paper was identified through discussions and actual Proof-of-Concept (PoC) studies conducted with large commercial industrial partners. In addition, a survey of existing AR technology solutions currently available on the market, as well as a review of relevant academic publications, adds evidence to the emergence of the pattern. A prime example of an industry looking to exploit AR for improved maintenance and repair operations is the Oil &amp; Gas industry. It operates large, complex, capital-intensive sites where, due to the hazardous nature of the materials being processed, there is a strong need for executing these tasks correctly and safely. Examples of AR-based tasks include:</p><p>Procedural task guidance: the field technician carries out a maintenance or repair operation on a piece of equipment. He can use step-by-step instructions displayed on his AR device, observing graphical annotations on the physical equipment. An example may be "turn this valve one half turn counter-clockwise", where an arrow points to the valve and a graphic showing the turn direction is displayed. During the operation the technician may ask a question or get feedback on his actions. He might ask "What sealant fluid should I use on this flange gasket?". Again, this question might be resolved by the remote cognitive expert assistant; if no answer can be found with a high enough level of confidence (or the system is not trained to support the task), the question is passed on to a human remote expert. The human remote expert can see what the field technician sees (via the AR device camera) and talk naturally with the field technician. The remote human expert can point to things in the field technician's environment, and the field technician will see this "remote pointing" as AR annotations attached to the physical equipment. The dialog and items pointed to during the interaction between the field technician and the remote human expert are captured by the cognitive system and can be harvested to create new procedural guidance sequences and answered questions for future use.</p><p>Troubleshooting: the field technician is trying to determine the cause of a problem. The remote cognitive assistant carries on a dialog with the field technician, guiding him to check the status of components using a pre-defined logical troubleshooting flowchart. At each stage, AR annotations can be used to help the field technician understand how to check the components, similar to step-by-step task guidance. If the problem is not identified using the pre-defined flowchart, the field technician can connect with a remote human expert who can suggest further items to check until the problem is determined. As in the previous example, the suggestions by the human remote expert are captured and added to the existing knowledge base, creating a troubleshooting flowchart for automated guidance the next time it is needed.</p><p>Asset information: automatic recognition of the type and instance of a device the technician is facing (e.g., a flow meter, valve, pump) and retrieval of related information.</p><p>The above HRE usage examples are typical of all types of process manufacturing industrial environments (Chemical, Petroleum, Pharmaceutical, Food, Beverage, Paint, Mining, …), where the right expertise is needed to keep the plant running smoothly.</p><p>Other industries where AR-supported HRE patterns are emerging are component manufacturing and assembly-oriented industries such as Automotive, Aerospace, Electronics, and Appliances. In these large manufacturing plants, there are maintenance and repair operations, similar to the above, that are done on the plant machinery. In addition, AR can be used to aid workers in their everyday assembly jobs. For example:</p><p>1. Complex assembly and inspection guidance: AR-supported HRE can be used to help guide workers through new and/or complex assembly and inspection procedures. This is especially relevant for new or reassigned workers. 2. Ad-hoc plant activities: in some cases, workers in the plant are given a task that is not part of the AR-supported procedures. When help is needed, the worker can contact a remote human expert who can explain what needs to be done and clarify the instructions by annotating the worker's environment.</p><p>Other areas where the HRE pattern is applicable to assist workers include transportation maintenance and repair, energy and utilities, and construction.</p></div>
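The troubleshooting flow described above (walking a pre-defined flowchart and escalating to a human remote expert when it is exhausted) can be sketched as follows. This is a minimal illustration only: the flowchart nodes, questions, and diagnoses are invented for the example and are not taken from any deployed system described in the paper.

```python
# Hypothetical troubleshooting flowchart: each node either asks the technician
# a yes/no question or yields a diagnosis; a None diagnosis means the flowchart
# is exhausted and the case escalates to a human remote expert.
FLOWCHART = {
    "start":       {"question": "Is the pump powered?", "yes": "check_seal", "no": "fault_power"},
    "check_seal":  {"question": "Is the seal leaking?", "yes": "fault_seal", "no": "escalate"},
    "fault_power": {"diagnosis": "Power supply failure"},
    "fault_seal":  {"diagnosis": "Worn flange gasket"},
    "escalate":    {"diagnosis": None},
}

def troubleshoot(answers):
    """Walk the flowchart using the technician's yes/no answers per question."""
    node = "start"
    while "diagnosis" not in FLOWCHART[node]:
        step = FLOWCHART[node]
        node = step[answers[step["question"]]]
    return FLOWCHART[node]["diagnosis"] or "escalate to human expert"

troubleshoot({"Is the pump powered?": "no"})  # -> "Power supply failure"
```

In the pattern described above, answers supplied by the human expert after escalation would be fed back into the knowledge base, i.e., new nodes would be added to the flowchart for the next occurrence.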
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Related Work</head><p>Remote collaboration tools between two employees via video, audio, and basic annotation tools are available from technology and solutions vendors. Examples include Scope AR, which develops a remote assistance platform (the RemoteAR tool) and a tool for building self-service step-by-step instructions for assembly and disassembly of mechanical parts (the Worklink tool) <ref type="bibr" target="#b2">[3]</ref>. The animated instructions are built using CAD models and are intended for AR glasses. A basic collaborative method for single video streaming was proposed in <ref type="bibr" target="#b11">[12]</ref>, including annotations on a single image when instructions need to be presented across the two sides. As an example of a more advanced type of information delivered by the human expert, a technology developed at XMReality enables sending mini-videos of the expert's hand gestures to be visualized, illustrating for the field worker how a task is to be conducted <ref type="bibr" target="#b3">[4]</ref>. The work in <ref type="bibr" target="#b12">[13]</ref> is closer to the Virtual Reality domain, offering pre-recorded videos of the expert's full body pose and motion. Other companies, such as Re'flect and Inscape, also use interactive two-way communication, with features including live video, anchoring and 3D tracking of expert annotations, and pre-recorded animated procedures <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. A technology and interface for superimposing virtual objects, representing relevant mechanical parts, onto the worker's reality are developed in <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16]</ref>. In the AR context, the virtual objects are created and manipulated to build automatic AR step-by-step guides for complex technological processes.</p><p>Most of the applications use the display of the mobile device or AR glasses to augment information. The former occupies the worker's hands, and the latter raises issues related to worker comfort. A different approach to visualization is taken in the IBM Research development of TeleAdvisor, a versatile augmented reality tool for remote assistance <ref type="bibr" target="#b16">[17]</ref>. In this project, a robot equipped with a camera and a pico-projector works side by side with the field technician, observing remote actions and projecting annotations onto the physical environment.</p><p>An additional example of a tool enabling expert-technician interaction is the system developed for the rail industry in the ManuVAR project <ref type="bibr" target="#b17">[18]</ref>, based on the ALVAR library <ref type="bibr" target="#b18">[19]</ref> for VR/AR applications. In addition to communication with the expert, the tool delivers data from diagnostics systems, technical details, and safety information.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Forms of AR based interaction</head><p>From the field experience and market study described above, we consider below three modes of user interaction with a Remote Assistant: self-service, person based, and cognitive based.</p><p>In self-service mode, the user of an AR-based system interacts with a fixed, deterministic flow of instructions on how to handle a task. The AR system identifies objects in the user's field of view and displays a prescribed augmented item (information icon, name, small document) near the identified object. These augmented items are anchored in 3D space to the identified object. The annotation points are positioned correctly relative to the observer's point of view and tracked with his motion, as seen in Figure <ref type="figure" target="#fig_1">1</ref>. In this mode the user requests information related to anchored annotation points. Examples include: retrieving a specification sheet of a pressure valve, displaying the menu of a restaurant, performing a repair with step-by-step guided instructions (see a visualization in Figure <ref type="figure" target="#fig_2">2</ref>), streaming live telemetry of an IoT device, or providing analytics based on measured data. The retrieved information is displayed either on smart see-through glasses or interlaced with the mobile camera feed, mimicking a see-through user experience. Note that an AR system set up in self-service mode does not require network connectivity, since AR models and augmented information are preloaded on the device.</p><p>Remote person mode is relevant when the system of 3D model, annotations, and instructions is not available for the specific task, such as a custom repair action. The recommended option is to provide support through a remote human expert and augmented reality. If the object of interest has a 3D model, the remote expert can produce the annotations online, enabling the technician to visualize and follow his instructions. Additional modes of interaction are exemplified in the Remote Expert demos cited in the Related Work section. When dealing with uncommon operations in a non-modeled environment, the 3D model can be created on premises by an online (SLAM) or offline (SfM) process and annotated by the remote expert.</p><p>Cognitive situational context mode. The basic self-service mode, with its stateless query-processing flow, can be enhanced by interaction with a cognitive assistant system that can comprehend the situational context. Specifically, the cognitive system's behavior depends on the history of the user's interaction with the system, as well as the user's interaction with recognized objects. When the context is maintained, user requests can take the form of a dialogue: when a field person asks the remote expert for the temperature reading of a pipe and the provided answer is "90 degrees", the next question can be "and what is the flow direction?". In this three-step example, the context of the last question was taken from the previous one.</p><p>The information gathered by the system includes the task the field person is working on, former questions, gestures, and related actions, as well as historical data and IoT telemetry. The goal is to assist the field person in decision-making for understanding the environment and resolving problems. The record of the user's interaction path through the environment enables the system to include data recorded from IoT objects around the path, even when they are not visible to the field person. The system is capable of processing more complex requests as it learns and accumulates information.</p><p>In a common scenario, a technician performing a repair in a machine room needs to ensure that the power source of the engine to be repaired is disconnected. If the engine is IoT connected, the technician can point at the machine and ask the cognitive system to "power off this engine". Then the repair procedure can be guided with step-by-step augmented instructions. It is essential that the system keep track of the user's actions in this case. Once done, the technician asks the system to reconnect the power and run a test.</p></div>
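The context carry-over in the dialogue example above ("90 degrees", then "and what is the flow direction?") can be sketched minimally as follows. The class, the telemetry layout, and the keyword-matching logic are illustrative assumptions for this sketch, not the authors' implementation; a real system would use the full speech and context analysis described in the text.

```python
# Minimal sketch of situational-context dialogue: the assistant remembers the
# last object the user asked about, so a follow-up question with no explicit
# subject is resolved against that object.
class ContextualAssistant:
    def __init__(self, telemetry):
        self.telemetry = telemetry      # maps object id -> {attribute: value}
        self.last_object = None         # conversational context

    def ask(self, question, subject=None):
        # A named subject updates the context; otherwise reuse the last one.
        if subject is not None:
            self.last_object = subject
        if self.last_object is None:
            return "Which object do you mean?"
        # Naive keyword match against the object's telemetry attributes.
        for attribute, value in self.telemetry.get(self.last_object, {}).items():
            if attribute in question:
                return f"{self.last_object} {attribute}: {value}"
        return "No matching reading."

telemetry = {"pipe-17": {"temperature": "90 degrees", "flow direction": "north"}}
assistant = ContextualAssistant(telemetry)
assistant.ask("temperature", subject="pipe-17")  # -> "pipe-17 temperature: 90 degrees"
assistant.ask("flow direction")                  # resolved from context: "pipe-17 flow direction: north"
```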
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Hybrid Remote Expert - emerging pattern</head><p>The emerging pattern of the Hybrid Remote Expert combines self-service, human expert, and cognitive assistance within a contextual analysis of the work environment. The diagram in Figure <ref type="figure">3</ref> depicts the flow of user interaction.</p><p>In the first step, input from the user's device, consisting of visual (camera) and audio (speech) information, is processed by object recognition, gesture recognition, and Speech-To-Text engines, determining the observed objects, the user's pointing or gestures, and the user's verbal request. When the 3D model of the scene is available in advance, tracking of the scene and annotations anchored on physical objects are also available. Analyzing the user's request determines the appropriate expert mode: self-service, human expert, or cognitive assistant. Queries for specific documents, specification sheets, and pre-recorded guidance are served through the self-service route. More complex queries, requiring context analysis, real-time operational information derived from telemetry or analytics sources, and more, are routed to the cognitive expert service. Finally, if an automatic solution is not available, a human expert is invoked.</p><p>A possible realization of the cognitive expert mode is based on services of the IBM Watson Cognitive platform. When activated, the cognitive system analyzes the situation using the categories and logic provided by the human experts, based on the observed objects, current requests, and the history of the conversation. The provided cognitive response ranges from answering questions regarding the current state and IoT telemetry, through retrieval of expert knowledge pertinent to the situation, to step-by-step procedures for standard maintenance and troubleshooting operations.</p><p>The human back-office expert intervenes when necessary, assisting the technician through a live video chat, producing on-the-fly annotations and hand drawings for the observed scene, and anchoring annotations to physical locations. The activity is logged in order to maintain the situation context and to be used in future interactions. Fig. <ref type="figure">3</ref>. Flow chart describing system components and the flow of user interaction with the hybrid remote expert.</p></div>
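The mode-selection step of the flow described above (self-service for pre-recorded content, cognitive expert for context-dependent queries, human expert as fallback) can be sketched as follows. The request fields, type names, and confidence threshold are illustrative assumptions; the actual routing logic of a deployed system would depend on the full request analysis described in the text.

```python
# Hedged sketch of the expert-mode routing in the Hybrid Remote Expert flow.
def route_request(request):
    """Return which expert mode should handle a field technician's request."""
    # Fixed, pre-recorded content (documents, spec sheets, guided procedures)
    # is served locally, without any remote connection.
    if request["type"] in ("document", "spec_sheet", "guided_procedure"):
        return "self-service"
    # Context-dependent or telemetry-driven queries go to the cognitive expert;
    # when its answer confidence is too low, escalate to a human expert.
    if request.get("confidence", 0.0) >= 0.7:
        return "cognitive expert"
    return "human expert"

route_request({"type": "spec_sheet"})                          # -> "self-service"
route_request({"type": "telemetry", "confidence": 0.9})        # -> "cognitive expert"
route_request({"type": "troubleshooting", "confidence": 0.2})  # -> "human expert"
```

The fallback ordering mirrors the business goal stated earlier: automate what can be automated, and reserve the scarce human expert for the queries no automatic route can answer confidently.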
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Afterword</head><p>The business value of Augmented Reality is evident in terms of field workers' saved time, reduced expert costs, global reach for market penetration, and reduced penalties or business losses due to slow operations by inexperienced workers. These values recur across industrial domains, and the needs are resolved similarly. This common approach of providing a conceptual solution with different implementations, driven either by people, self-service, or a cognitive assistant, is recognized as the emerging pattern of the Hybrid Remote Expert (HRE). Practical solutions implementing HRE should be developed on top of a "build your own AR" cloud middleware, so that business units and services can create their own valuable solutions using a variety of technologies, such as hand/pointing/gesture recognition and a powerful text analysis engine. These technologies enable higher-level tasks like guiding and monitoring user actions, thus assisting the user in more complex scenarios.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Transportation maintenance &amp; repair: aircraft, trains, cars, ships, … Energy &amp; utilities: power plants, utility grids, transformer stations, … Construction: matching plans to what needs to be built, inspection, …</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Visualization of the scene-rooted AR annotations in technical environment, given as arrows and yellow dots with textual labels.</figDesc><graphic coords="5,125.00,303.40,345.30,65.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. Snapshots from a guided step-by-step procedure for detecting and repairing an electronic board fault.</figDesc><graphic coords="5,127.22,589.40,340.72,63.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="7,124.92,207.40,345.45,252.35" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">On Augmented Reality</title>
		<author>
			<persName><forename type="first">T</forename><surname>Cook</surname></persName>
		</author>
		<ptr target="http://www.businessinsider.com/ap-ple-ceo-tim-cook-explains-augmented-reality-2016-10" />
	</analytic>
	<monogr>
		<title level="m">Businessinsider</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Heath</surname></persName>
		</author>
		<ptr target="http://www.zdnet.com/article/five-ways-augmented-reality-will-transform-your-business/" />
		<title level="m">Five ways augmented reality will transform your business</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">TechRepublic</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<ptr target="http://www.scopear.com/products" />
		<title level="m">Scope AR company, RemoteAR and Worklink collaboration tools</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<ptr target="https://www.youtube.com/watch?v=tmp5KNCn0RE" />
		<title level="m">Remote guidance demo</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
		<respStmt>
			<orgName>XMReality company</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<ptr target="https://www.re-flekt.com/industrial/" />
		<title level="m">Re'flect company, remote assistance demo</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<ptr target="https://www.youtube.com/watch?v=M1mKffgNK98" />
		<title level="m">Inscape company: remote assistance tool demo</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<ptr target="http://www.apple.com/ios/8/" />
		<title level="m">Siri Assistant</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
		<respStmt>
			<orgName>Apple company</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<ptr target="http://9.amazon.com/spa/index.html" />
		<title level="m">Alexa assistant</title>
				<imprint>
			<publisher>Amazon company</publisher>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<ptr target="http://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=GBJ03056USEN" />
		<title level="m">Watson Cognitive Assistant</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
		<respStmt>
			<orgName>IBM company</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">General</forename><surname>Motors</surname></persName>
		</author>
		<ptr target="https://techcrunch.com/2016/10/26/gm-puts-ibm-watson-in-cars-with-the-new-11-go-platform/" />
		<title level="m">OnStar Go platform</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<ptr target="https://www-03.ibm.com/press/us/en/pressrelease/50838.wss" />
		<title level="m">IBM Watson providing platform for the OnStar</title>
				<imprint>
			<date type="published" when="2016-12-04">Dec 4 2016</date>
		</imprint>
		<respStmt>
			<orgName>IBM Company</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Spatial workspace collaboration: a SharedView video support system for remote collaboration capability</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kuzuoka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI conference on Human factors in computing systems</title>
				<meeting>the SIGCHI conference on Human factors in computing systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="1992-06">June 1992</date>
			<biblScope unit="page" from="533" to="540" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Immersive 3d environment for remote collaboration and training of physical activities</title>
		<author>
			<persName><forename type="first">G</forename><surname>Kurillo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bajcsy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nahrsted</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kreylos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Virtual Reality Conference</title>
				<imprint>
			<date type="published" when="2008-03">2008. March 2008</date>
			<biblScope unit="page" from="269" to="270" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Augmented Reality (AR) Applications for Supporting Human-robot Interactive Cooperation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Michalos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Karagiannis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Makris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ö</forename><surname>Tokçalar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chryssolouris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procedia CIRP</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="370" to="375" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Virtual Replicas for Remote Assistance in Virtual and Augmented Reality</title>
		<author>
			<persName><forename type="first">O</forename><surname>Oda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Elvezio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sukan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Feiner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tversky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th Annual ACM Symposium on User Interface Software &amp; Technology</title>
				<meeting>the 28th Annual ACM Symposium on User Interface Software &amp; Technology</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2015-11">Nov 2015</date>
			<biblScope unit="page" from="405" to="415" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Henderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Feiner</surname></persName>
		</author>
		<title level="m">Augmented reality for maintenance and repair (ARMAR)</title>
				<imprint>
			<publisher>Columbia University, Dept. of Computer Science, New York</publisher>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
	<note type="report_type">Report</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">TeleAdvisor: a versatile augmented reality tool for remote assistance</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gurevich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lanir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Stone</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGCHI Conference on Human Factors in Computing Systems</title>
				<meeting>the SIGCHI Conference on Human Factors in Computing Systems</meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2012-05">May 2012</date>
			<biblScope unit="page" from="619" to="622" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Remote support for maintenance tasks by the use of Augmented Reality: the ManuVAR project</title>
		<author>
			<persName><forename type="first">J</forename><surname>Azpiazu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Siltanen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Multanen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mäkiranta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Barrena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Diez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Agirre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Congress on Virtual Reality Applications (CARVI)</title>
				<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m">ALVAR: A Library for Virtual and Augmented Reality</title>
		<ptr target="http://www.vtt.fi/multimedia/al-var.html" />
				<imprint>
			<date type="published" when="2016-12-04">Dec 4, 2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
