<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Abstract and Concrete Interaction with Mixed Reality Systems: The case of the mini screen, a new interaction device in Computer-Assisted Surgery</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Benoit</forename><surname>Mansoux</surname></persName>
							<email>mansoux@imag.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Laboratoire CLIPS-IMAG BP 53</orgName>
								<address>
									<postCode>38041</postCode>
									<settlement>Grenoble cedex 9</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Laurence</forename><surname>Nigay</surname></persName>
							<email>nigay@imag.fr</email>
							<affiliation key="aff0">
								<orgName type="laboratory">Laboratoire CLIPS-IMAG BP 53</orgName>
								<address>
									<postCode>38041</postCode>
									<settlement>Grenoble cedex 9</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jocelyne</forename><surname>Troccaz</surname></persName>
							<email>jocelyne.troccaz@imag.fr</email>
							<affiliation key="aff1">
								<orgName type="laboratory">Laboratoire TIMC-IMAG I. I. I. S. -Faculté de Médecine</orgName>
								<address>
									<postCode>38706</postCode>
									<settlement>La Tronche cedex</settlement>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Abstract and Concrete Interaction with Mixed Reality Systems: The case of the mini screen, a new interaction device in Computer-Assisted Surgery</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A80F7A15B1448F04B551D51235774F15</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T22:14+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Mixed Reality</term>
					<term>Computer Assisted Surgery</term>
					<term>Design Space</term>
					<term>Interaction Device</term>
					<term>Mini screen</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper we focus on the design of mixed reality (MR) systems. We propose two design spaces that can be useful in a top-down (abstract to concrete) design method for MR systems. The first design space consists of an organized framework of abstract interaction situations for describing mixed systems. Each situation is depicted by an ASUR diagram and describes the exchange of information between the entities involved in a mixed system. The situations are abstract because they are independent of the interaction modalities (both interaction languages and devices). The abstract interaction situations are illustrated by several Computer-Assisted Surgery (CAS) systems. Such a framework is useful for the designer in order to systematically explore the set of possibilities at an early stage of the interaction design, without being biased by a particular technology. With the interaction situation described, the designer can then focus on the modalities to be used: both passive and active modalities can be elected. This design stage consists of concretizing the interaction situation by selecting the modalities. For this stage of the design, we propose a design space that characterizes the possible usages of one particular innovative interaction device for CAS systems: a mini screen. We illustrate the complementarity of our two design spaces by presenting two CAS systems that embed a mini screen for different purposes in the interaction: one system is based on a localized mini screen fixed on the surgical tool while the other involves the surgeon handling the mini screen on top of the patient's body.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>INTRODUCTION</head><p>In this paper, we focus on the design of Mixed Reality (MR) systems, in particular MR systems that assist a user in performing a task on a physical object (a class of MR systems called "Augmented Reality" in <ref type="bibr" target="#b1">[2]</ref>). One of our main application domains for such MR systems is Computer Assisted Surgery (CAS), in the context of a multidisciplinary project that involves the HCI and the CAS research groups of the University of Grenoble. The main objective of CAS systems is to help a surgeon in defining and executing an optimal surgical strategy based on a variety of multi-modal data inputs. MR systems play a central role in the CAS domain because the key point of a CAS system is to "augment" the physical world of the surgeon (the operating theater, the patient, the surgical tools, etc.) by providing pre-operative information during the surgery. MR systems are now entering many surgical specialties and can take on the most varied forms. Although many CAS systems have been developed and provide real clinical improvements, their design is ad hoc and principally driven by technology.</p><p>In this context, our research aims at providing elements useful for the design of usable MR systems by focusing on the interaction between the user and the MR system. We present two design spaces that can be useful in a top-down design method for MR systems. The first design space, presented in the second section of the paper, consists of an organized framework of abstract interaction situations for describing MR systems. This first result is useful at an early stage of the design of MR systems: indeed it enables the designer to systematically explore the set of possibilities without being biased by the available technologies. 
While this first design space focuses on abstract interaction (i.e., independent of the interaction technologies), our second design space, presented in the third section of the paper, characterizes the possible usages of one particular interaction device, a mini screen. Our two design spaces are therefore complementary and address different stages of a top-down design method for MR systems: abstract versus concrete interaction. Before presenting our two design spaces, we first clarify the two interaction design steps, i.e., the design of the abstract and of the concrete interaction.</p><p>We call an interaction situation an abstract description of the interaction involved in an MR system. Such a description is independent of the interaction modalities. We define in <ref type="bibr" target="#b6">[7]</ref> a modality as the coupling of a physical device with an interaction language. After describing the interaction situation, the following step in the design consists of concretizing the abstract situation by choosing the modalities: the description of the interaction is then concrete. For describing the abstract and concrete interaction, we use the ASUR notation <ref type="bibr" target="#b1">[2]</ref> <ref type="bibr" target="#b2">[3]</ref>. In the following paragraph, we summarize the main characteristics of the notation. We then describe how to use the ASUR notation for describing the abstract and concrete interaction.</p></div>
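The definition of a modality as the coupling of a physical device with an interaction language can be sketched in code. The following minimal Python sketch is illustrative only: the device and language labels are our own assumptions for the example, not part of the ASUR notation or of CASPER's specification.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Modality:
    """A modality as defined in [7]: the coupling of a physical
    device with an interaction language."""
    device: str
    language: str

# Hypothetical CASPER-style examples (labels are illustrative):
guidance_output = Modality(device="screen",
                           language="cross-based trajectory display")
needle_tracking = Modality(device="optical localizer",
                           language="3D position and orientation")

print(guidance_output.device)  # screen
```

Choosing the modalities is precisely the step that turns an abstract interaction situation into a concrete one.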
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ASUR notation</head><p>ASUR <ref type="bibr" target="#b1">[2]</ref>[3] stands for "Adapter", "System", "User", "Real objects". Within this user-centered notation, MR systems are described in terms of entities (A, S, U, R) taking part in the interaction and the relations between those entities. Between the user (U) and the computer system (S), the adapters bridge the gap between the physical world and the digital one. They can be input adapters (Ain) (e.g., a mouse, a localization mechanism) or output ones (Aout) (e.g., a video projector, audio speakers). Physicality is one key feature of MR systems: real objects are involved in the task. Within the ASUR notation we distinguish physical objects that are tools (Rtool) for performing the task from the ones that are the objects of the task (Robject).</p><p>Three kinds of relationship between two ASUR entities are identified:</p><p>• Exchange of data is represented by an arrowed line between two ASUR entities (A→B).</p><p>• Physical activity triggering an action: a double-line arrow (A⇒B) denotes the fact that when the entity A meets a given spatial constraint with respect to entity B, data will be exchanged along another specified relationship (C→D).</p><p>• Physical collocation is represented by a non-directed double line (A=B). This refers to a persistent physical proximity of two entities.</p><p>Finally, the ASUR entities and relationships are described by a set of characteristics. Table <ref type="table" target="#tab_0">1</ref> presents some of them. For example, the first characteristic induced by the use of a real object (R) or an adapter (A) is the human sense involved in perceiving data from such an entity or in performing actions using such an entity. The most commonly used are the haptic, visual and auditory senses. 
A second characteristic is the location where the user has to focus with the required sense, in order to perceive/manipulate the real entity as well as to manipulate the adapter or perceive the data provided by it. In addition, one characteristic of a relation between two ASUR entities is the interaction language used to express the data carried by the relation. If we refer to our definition of a modality <ref type="bibr" target="#b6">[7]</ref> as the coupling of a physical device with an interaction language, the device is described by an ASUR entity while the interaction language is a characteristic of the relation from this entity (device) to another ASUR entity. In <ref type="bibr" target="#b2">[3]</ref>, we explained how we use the ASUR notation during the requirements definition phase for describing usage scenarios and during the external specification phase for describing the concrete designed interaction. Going one step further, we define here two levels of abstraction in describing the interaction in an MR system, as part of a top-down (abstract to concrete) method for designing the interaction. Interaction situations are described using ASUR at the most abstract level. Nevertheless, for analytical reasons, we describe the two levels of interaction description in reverse order, from the concrete one to the abstract one.</p></div>
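The entities and the three kinds of relationships can be modeled as a small data structure. This Python sketch is illustrative only; the string labels for entities and the CASPER-like fragment are assumptions made for the example, not the notation itself.

```python
from dataclasses import dataclass
from enum import Enum

class Kind(Enum):
    """The three kinds of relationships between ASUR entities."""
    DATA = "->"          # exchange of data: arrowed line A -> B
    TRIGGER = "=>"       # physical activity triggering an action
    COLLOCATION = "="    # persistent physical proximity

@dataclass(frozen=True)
class Relation:
    source: str  # entity label: "U", "S", "Ain", "Aout", "Rtool", "Robject"
    target: str
    kind: Kind

    def __str__(self) -> str:
        return f"{self.source} {self.kind.value} {self.target}"

# A fragment of a CASPER-like diagram (labels follow the paper's notation):
fragment = [
    Relation("U", "Rtool", Kind.DATA),           # the surgeon handles the needle
    Relation("Rtool", "Robject", Kind.TRIGGER),  # needle near the patient's body
    Relation("Rtool", "Ain", Kind.DATA),         # the needle is tracked
]
print(", ".join(str(r) for r in fragment))
```

A full diagram would additionally carry the characteristics (sense, location, interaction language) attached to entities and relations.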
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Two levels of abstraction in describing interaction using ASUR</head><p>(1) The most concrete description is the final stage of the external specification phase. Interaction is fully depicted by a set of ASUR entities and relations that are described by the ASUR characteristics. The interaction modalities (devices and languages) are therefore chosen: we distinguish two types of modalities in an MR system, the active and passive modalities. Active and passive modalities are defined for the MR systems we are concerned with in this paper: the object/target of the main task is physical, for example the patient for CAS systems.</p><p>• For inputs, active modalities are used by the user to issue a command to the computer, such as a pedal to move a laparoscope in a CAS system. Passive modalities are used to capture relevant information for enhancing the realization of the task, information that is not explicitly expressed by the user to the computer ("perceptual user interfaces" <ref type="bibr" target="#b8">[9]</ref>). For example, in our CASPER (Computer ASsisted PERicardial puncture) system, presented in Figure <ref type="figure" target="#fig_0">1</ref>, a passive modality is used for tracking the position of the puncture needle.</p><p>• For outputs, active modalities, conveying information from the computer to the user, imply that the user explicitly switches attention from her/his current task focus to a new focus in order to perceive the provided information. For example, in our CASPER system, visual guidance information during the puncture task is displayed on a screen. While using CASPER (Figure <ref type="figure" target="#fig_0">1</ref>), the surgeon must always shift between looking at the screen and looking at the patient and the needle (i.e., the task environment). 
As opposed to active modalities, passive output modalities convey information to the user that is integrated in her/his task environment, for example displaying anatomical information onto the patient's body during a surgery. For the case of passive output modalities, the user does not have to switch attention from her/his current task focus in order to perceive the provided information. The concrete interaction description of Figure <ref type="figure" target="#fig_1">2</ref> is not complete. The ASUR diagram is completed by the characteristics of the identified entities and relations. For example, the interaction language (one of the characteristics) used to convey the guidance information on screen (Aout) must be described: using CASPER, in the same window on screen, the current position and orientation of the needle are represented by two mobile crosses, while one stationary cross represents the planned trajectory. A complete description of the concrete interaction in ASUR can be found in <ref type="bibr" target="#b1">[2]</ref>.</p><p>(2) A more abstract level of description of the interaction consists of focusing on the exchange of information between the involved entities during interaction. By doing so, we describe what we call the interaction situation. Interaction modalities are not yet chosen but the elementary tasks are identified. The roles of the adapters are therefore defined (for example, a localization mechanism, a data presenter) but the concrete adapters (physical devices) as well as the forms of the data conveyed along the relations are not yet defined. In addition, the physical setting is not yet defined: the physical relationships between entities are not yet decided. In conclusion, such a level of description consists of an ASUR diagram:</p><p>• without characterization of the entities and relations.</p><p>• with one kind of relation: exchange of data (A→B). 
Figure <ref type="figure" target="#fig_2">3</ref> illustrates this level of description using our CASPER system. Figure <ref type="figure" target="#fig_2">3</ref> is therefore a more abstract description of the interaction described in Figure <ref type="figure" target="#fig_1">2</ref>. In a top-down (abstract to concrete) design method, the designer first focuses on the interaction situation (i.e., the abstract description of the interaction) and then selects the modalities for concretizing the interaction. Our first design space, identifying a set of interaction situations, is therefore useful at an early stage of the interaction design for reasoning about the interaction without being biased by the interaction technologies. Our second design space characterizes the possible usages of one particular innovative interaction device (output adapter) for CAS systems: a mini screen. This second design space is therefore useful for designing concrete interactions involving a mini screen.</p></div>
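The move from a concrete description to an interaction situation (drop the characteristics, keep only data-exchange relations) can be sketched as a simple filter. The tuple encoding of relations below is a hypothetical convenience for the example, not part of ASUR, and the characteristics attached to the CASPER-like relations are illustrative.

```python
def abstract_situation(concrete):
    """Derive the interaction situation (abstract ASUR description) from a
    concrete diagram: keep only data-exchange relations ("->") and drop the
    characteristics attached to entities and relations.
    `concrete` is a list of (source, target, kind, characteristics) tuples."""
    return [(src, dst) for src, dst, kind, _ in concrete if kind == "->"]

# Concrete CASPER-like relations (characteristics are illustrative):
concrete = [
    ("U", "Rtool", "->", {"sense": "haptic"}),
    ("Rtool", "Robject", "=>", {}),  # trigger: needle near the patient's body
    ("Rtool", "Ain", "->", {"language": "3D position"}),
    ("Ain", "S", "->", {}),
    ("S", "Aout", "->", {}),
    ("Aout", "U", "->", {"sense": "visual"}),
]
print(abstract_situation(concrete))
```

The result corresponds to the kind of diagram shown for CASPER at the abstract level: entities and data exchanges only.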
<div xmlns="http://www.tei-c.org/ns/1.0"><head>INTERACTION SITUATION DESIGN SPACE</head><p>Our design space is made of interaction situations that are independent of the interaction modalities. A situation is dedicated to a particular task. For example, in Figure <ref type="figure" target="#fig_2">3</ref>, the diagram depicts the interaction situation for the task of pericardial puncture while using CASPER. A situation describes both the abstract input and output interaction. Our framework is composed of input and output situations. Our approach for establishing the framework of interaction situations draws from our distinction of active and passive modalities.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Input interaction situations</head><p>For inputs (user to computer), we identify four situations, two of them involve active modalities while the other two involve passive modalities.</p><p>(1) The two situations, Class I-input and Class II-input, involve active modalities. In these situations, the user explicitly issues a command to the computer system. The user must switch attention from the task's focus (Robject) to a new focus in order to interact with the computer. As a consequence, in the ASUR diagram that depicts these two situations, there is no Robject involved. Without Robject, the two remaining possibilities are: Class I-input: U→Ain→S</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Class II-input: U→Rtool→Ain→S</head><p>The first situation (Class I-input) depicts a classical interaction with a computer, for example using a mouse. The second situation (Class II-input) describes the case where the user manipulates a physical object (Rtool) to interact with the computer via an adapter that captures the manipulations. Examples of such input situations are the physical icons that are physical handles to digital objects, "coupling the bits with everyday physical objects and architectural surfaces" <ref type="bibr" target="#b5">[6]</ref>.</p><p>(2) We identify two situations that involve passive modalities. The user is performing a task in the physical world on an Robject while the computer captures relevant information for enhancing the realization of the task, thanks to passive modalities. Two situations are possible, depending on whether the user manipulates Robject using a tool ([Rtool, Robject]) or directly manipulates Robject.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Class III-input: U→[Rtool, Robject]→Ain→S</head><p>Class IV-input: U→Robject→Ain→S A Class III-input example is the CASPER input situation described in Figure <ref type="figure" target="#fig_2">3</ref>: During the puncture task, the surgeon is handling the puncture needle (Rtool) that touches the patient's body ([Rtool, Robject]). Both the needle and the patient are localized by the system via adapters.</p><p>For these two situations that involve passive modalities, we assumed that the user and the object of the task are physically close together. In the case of telesurgery, for example, the surgeon (user) and the patient (object of the task) are distant. Such situations are described using ASUR by adding an ASUR chain that comprises the computer system (S) between:</p><p>• the user (U) and the tool ([Rtool, Robject]) for Class III-input,</p><p>• the user (U) and the object of the task (Robject) for Class IV-input.</p><p>The ASUR chain to be added is either:</p><formula xml:id="formula_0">(a) (Ain→S→Aout) (b) (Rtool→Ain→S→Aout)</formula><p>The two ASUR chains differ by the way the user interacts with the computer system (S). The two chains (a) and (b) respectively correspond to Class I-input and Class II-input.</p><p>We therefore obtain four classes: </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>U→(Rtool→Ain→S→Aout)→Robject→Ain→S</head><p>For example, the input interaction situation of the telesurgery system described in <ref type="bibr" target="#b4">[5]</ref> belongs to Class III-input-b: The surgeon (U) remotely controls a slave robot (Aout), which holds the surgical tools (Aout→[Rtool, Robject]), by manipulating force-feedback arm-mounted tools (U→Rtool→Ain).</p></div>
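The four input classes are distinguished by the first entity the user acts on in the minimal ASUR chain, which suggests a straightforward classifier. A Python sketch, under the assumption that chains are encoded as lists of entity labels from U to S:

```python
def classify_input(chain):
    """Classify a minimal input ASUR chain (entity labels from U to S)
    into the four input situation classes of the design space."""
    assert chain[0] == "U" and chain[-1] == "S", "input chains run from U to S"
    first = chain[1]  # the entity the user acts on
    return {
        "Ain": "Class I-input",                # active: direct command
        "Rtool": "Class II-input",             # active: via a physical tool
        "[Rtool,Robject]": "Class III-input",  # passive: tool touching the object
        "Robject": "Class IV-input",           # passive: direct manipulation
    }[first]

# CASPER's input situation (needle touching the patient's body):
print(classify_input(["U", "[Rtool,Robject]", "Ain", "S"]))  # Class III-input
```

The distant variants (a) and (b) simply splice an extra (Ain, S, Aout) or (Rtool, Ain, S, Aout) sub-chain after U; the class of the situation is unchanged.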
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Output interaction situations</head><p>For outputs (computer to user), we identify four situations, two involving active modalities and two involving passive ones. This is symmetric to the input situations.</p><p>(1) Class I-output and Class II-output correspond to situations involving active modalities. The user must switch attention (an explicit action of the user) from her/his current task focus (Robject) to a new focus in order to perceive the provided information carried by the active modalities. The ASUR diagrams of these two situations therefore do not comprise an entity Robject.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Class I-output: S→Aout→U</head><p>Class II-output: S→Aout→Rtool→U A Class I-output example is the CASPER output situation described in Figure <ref type="figure" target="#fig_2">3</ref>: During the puncture task, the surgeon perceives guidance information displayed on a screen. An example of a Class II-output situation would correspond to a CAS system that displays information on the wall of the operating theater: Although a surface of the physical environment is used for displaying information (Rtool), it implies that the surgeon consciously switches attention from the environment of the task (the operating field) to the wall in order to perceive the information.</p><p>(2) As for inputs, two output situations involve passive modalities. These situations describe the cases where the user perceives the information provided by the system within her/his task environment (Robject). The ASUR diagrams that describe these two situations therefore involve an Robject.</p><p>Class III-output: S→Aout→[Rtool, Robject]→U Class IV-output: S→Aout→Robject→U</p><p>The output situation using the PADyC (Passive Arm with Dynamic Constraints) system <ref type="bibr" target="#b7">[8]</ref> belongs to Class III-output. Indeed, using PADyC, the surgeon is handling a surgical tool that is linked to a passive arm (Aout). The programmable arm provides haptic guidance information (touch feedback) to the surgeon while performing the surgery. Another output situation of this class, one that involves a mini screen, will be described in the last section of the paper.</p><p>A Class IV-output example is the situation using the second version of CASPER <ref type="bibr" target="#b1">[2]</ref> that involves a see-through head-mounted display (HMD), instead of a screen as in the first version of CASPER (Figure <ref type="figure" target="#fig_0">1</ref>). 
Thanks to the HMD, the surgeon directly perceives the guidance information displayed on top of the patient. Another example is the Image Overlay system <ref type="bibr" target="#b0">[1]</ref> presented in Figure <ref type="figure" target="#fig_6">5</ref>. The guidance information is displayed onto a see-through surface located in between the surgeon and the patient's body. Such an interaction situation belongs to Class IV-output.</p><p>The same reasoning as the one for inputs can be applied for studying the case where the user and the object of the task are distant. The two chains to be added to Class III-output and Class IV-output, in between Robject and U, are:</p><formula xml:id="formula_1">(a) (Ain→S→Aout) (b) (Ain→S→Aout→Rtool)</formula><p>For example:</p><p>Class IV-output-a: S→Aout→Robject→(Ain→S→Aout)→U</p><p>An example of such a situation would be the following: a telesurgery system displays anatomical information on top of the patient's body (S→Aout→Robject), while a camera (Ain) facing the patient's body enables the distant surgeon (U) to see on her/his screen (Aout) the image of the patient enhanced by the anatomical information.</p></div>
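Symmetrically, the four output classes are distinguished by the entity through which the user perceives the information, i.e. the entity just before U in the minimal chain. A Python sketch, again assuming chains are encoded as lists of entity labels:

```python
def classify_output(chain):
    """Classify a minimal output ASUR chain (entity labels from S to U)
    into the four output situation classes; symmetric to the input case."""
    assert chain[0] == "S" and chain[-1] == "U", "output chains run from S to U"
    last = chain[-2]  # the entity the user perceives
    return {
        "Aout": "Class I-output",               # active: e.g. a separate screen
        "Rtool": "Class II-output",             # active: e.g. a wall display
        "[Rtool,Robject]": "Class III-output",  # passive: e.g. a haptic arm
        "Robject": "Class IV-output",           # passive: e.g. a see-through HMD
    }[last]

print(classify_output(["S", "Aout", "U"]))  # Class I-output
```

As with inputs, the distant variants (a) and (b) splice an extra sub-chain before U without changing the class of the situation.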
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Completeness of the situation design space</head><p>For each input as well as output situation, we described all the combination possibilities of ASUR entities, making the design space complete. Nevertheless, for each situation, the described ASUR chain is the minimal one. While concretizing the abstract situation, some ASUR entities may be inserted into the minimal chain.</p><p>The completeness of the framework makes it a useful tool for the designer to systematically explore the set of possibilities at an early stage of the interaction design, without being biased by a particular technology. With the interaction situation described, the designer can then focus on the modalities (device and language) that are passive or active according to the situation, as well as on the physical setting (physical relations described in ASUR). From an abstract interaction situation, several concrete interaction solutions can be designed. In the following paragraph, we focus on concrete interaction involving a particular device: a mini screen.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CONCRETE INTERACTION INVOLVING A MINI SCREEN</head><p>The transition from interaction situation to concrete interaction is difficult because the set of possibilities in terms of modalities (device and language) is huge. As a first step in supporting this transition, we propose a design space that describes the possible modalities that involve a mini screen. Small devices are increasingly being used in MR systems, as in <ref type="bibr" target="#b9">[10]</ref>, and offer new interaction techniques, like the Embodied User Interfaces defined in <ref type="bibr" target="#b3">[4]</ref>. For CAS systems, a small screen is an innovative device.</p><p>Beyond standard technical features of an LCD screen like size, weight, resolution, frame rate, number of colors, luminance, viewing angle, and thickness, we propose a design space based on more interaction-centered characteristics, inspired by our situation design space. As shown in Figure <ref type="figure" target="#fig_4">4</ref>, our framework comprises four dimensions, namely Input, Output, Manipulation and DOF.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Input</head><p>The Input dimension is used to characterize how the screen is used by the user to convey information to the computer system. Five values are identified along this dimension: none, tactile, pressure, acceleration, localization. The value none means that the screen is not used as part of an input modality. Tactile is the common input modality of a PDA (touch screen). Moreover, sensors can be embedded within the device; thus pressure or acceleration can be detected, as in <ref type="bibr" target="#b3">[4]</ref>. Finally, the localization of the screen can be known by the computer system thanks to a tracking mechanism.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Output</head><p>The Output dimension is used to describe how the device conveys information to the user. We focus here on visual data, but other non-visual interaction languages can be used, including haptic feedback. Along this dimension, two values are identified, showing whether the displayed data are dependent on the screen's position or not. For instance, if the screen is tied to a tool handled by the surgeon and it conveys guidance information, then the output data may be dependent on the screen's position: the displayed data change according to the screen's positions over the patient's body. Other kinds of data (e.g., blood pressure, body temperature) may be independent of the screen's position in that same case. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Manipulation</head><p>The Manipulation dimension expresses the context of use of the screen. Two values, direct and indirect, are identified along this dimension. The manipulation is direct if the user holds the device. The manipulation is indirect when the device is bound to another entity (e.g., an automatic arm), which itself is manipulated by the user.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Degree of Freedom (DOF)</head><p>The DOF dimension is used to describe the number of different ways in which the screen can move. The screen can be stationary, move only in translation or in rotation, or accept free motions. These values are always defined relative to a referential (frame of reference). For instance, if a screen is tied to a surgical tool (e.g., a drill), the screen is stationary in the tool's referential, but freely mobile in a more global referential: its position and orientation are therefore tool-dependent. The referential is often determined by the context of use (Manipulation).</p></div>
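The four dimensions and their values can be captured as a small validated record, so that each mini screen usage is one point in the design space. A Python sketch; the value spellings (e.g. "position-independent") are our own shorthand for the paper's labels, and the two example points correspond to the two CAS systems described next.

```python
from dataclasses import dataclass

# Admissible values of the four dimensions, as named in the design space:
INPUT = {"none", "tactile", "pressure", "acceleration", "localization"}
OUTPUT = {"position-dependent", "position-independent"}
MANIPULATION = {"direct", "indirect"}
DOF = {"stationary", "translation", "rotation", "free"}

@dataclass(frozen=True)
class MiniScreenUsage:
    """One point in the mini screen design space."""
    input: str
    output: str
    manipulation: str
    dof: str

    def __post_init__(self):
        # Reject values outside the design space.
        assert self.input in INPUT
        assert self.output in OUTPUT
        assert self.manipulation in MANIPULATION
        assert self.dof in DOF

# Guidance system: screen tied to the tool, position-independent data.
guidance = MiniScreenUsage("none", "position-independent", "indirect", "stationary")
# Magic-lens usage: localized screen handled over the patient's body.
magic_lens = MiniScreenUsage("localization", "position-dependent", "direct", "translation")
```

Validating the values at construction time keeps design alternatives confined to the points the design space actually defines.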
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Two CAS systems involving a mini screen</head><p>We present two usages of a mini screen that we designed and are currently developing. They correspond to different interaction situations as well as characterizations within our mini screen design space.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Guidance system</head><p>An immediate usage of the mini screen consists of using it as an output adapter to display guidance information. We therefore obtain the same situation as in CASPER presented in Figure <ref type="figure" target="#fig_2">3</ref> (Class III-input and Class I-output).</p><p>While concretizing the interaction, we decided that the mini screen would be tied directly to the tool (e.g., a drill) or to a tool guide if the tool itself is too fragile (e.g., a needle).</p><p>The ASUR description of the concrete interaction therefore includes: (Aout=Rtool). Within the mini screen design space, this design decision is described by (none, screen position independent data, indirect, stationary) as shown in Figure <ref type="figure" target="#fig_4">4</ref>. This design decision was driven by the need to reduce the perceptual discontinuity as defined in <ref type="bibr" target="#b1">[2]</ref> and experimentally observed in CASPER. Linking the screen and the tool may indeed reduce the perceptual discontinuity.</p><p>As a CAS system to integrate our prototype, we have chosen puncture applications, either pericardial or renal. Guidance information in these systems is limited and easy to represent (tool direction, tool orientation, and tool depth).</p><p>Another possible usage of a mini screen consists of not reducing it to an output adapter only, as in the previous system, but allowing it to be manipulated as a tool by the surgeon. The mini screen can then be used as a magnifying glass or "magical lens" on top of the patient's body. Our design is inspired by the Image Overlay system <ref type="bibr" target="#b0">[1]</ref> presented in Figure <ref type="figure" target="#fig_6">5</ref>. As opposed to the interaction situation of the Image Overlay system where the surface is an output adapter (Aout), the mini screen in our system is both an Aout and an Rtool. 
Indeed the surgeon is no longer manipulating surgical tools but the mini screen. The interaction situation therefore belongs to Class III-output as opposed to Class IV-output for the Image Overlay system. For inputs, the interaction situation corresponds to the same one as in CASPER. The mini screen as a tool (Rtool) is localized by an input adapter.</p><p>While concretizing the interaction, the same localizer (Ain) can be used for both the patient and the mini screen (as in CASPER). We fixed the value "translation on top of the patient's body" along the dimension DOF, as shown in Figure <ref type="figure" target="#fig_4">4</ref>. The ASUR description of the concrete interaction therefore includes: (Rtool ⇒ Robject). </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>CONCLUSION</head><p>In this paper we presented two design spaces for MR systems.</p><p>• The interaction situation design space is useful for the designer in order to systematically explore the set of possibilities at an early stage of the interaction design, without being biased by a particular technology.</p><p>• The mini screen design space helps the transition between the abstract and concrete interaction by characterizing possible usages of one particular innovative interaction device for CAS systems: a mini screen.</p><p>As ongoing work, we are studying the interaction situations of other types of MR systems, and not only the ones that assist a user in performing a task on a physical object as in CAS systems.</p><p>During the workshop we would like to discuss the completeness of the mini screen design space and apply our interaction situation design space for describing the interaction situations of the presented systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 :</head><label>1</label><figDesc>Fig. 1: CASPER in use during the intervention. In Figure 2, we illustrate this level of interaction description by presenting the ASUR diagram of the CASPER system. During the surgery, CASPER assists the surgeon (U) by providing in real time the position of the puncture needle (Rtool) according to the planned trajectory. Two adapters (Ain, Aout) are necessary: The first one (Aout) is the screen for displaying guidance to the surgeon, and the second one (Ain) is dedicated to tracking the needle position and orientation as well as the patient's body (Robject). The localization of the needle is possible within a predefined volume near the patient's body. Such a constraint is represented in Figure 2 by an ASUR relation ⇒ (physical activity triggering an action).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 :</head><label>2</label><figDesc>Fig. 2: ASUR diagram of the concrete interaction in CASPER. For a complete ASUR description, the diagram is completed by the characteristics of each entity and relation (see [2]).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 3 :</head><label>3</label><figDesc>Fig. 3: ASUR description of the abstract interaction in CASPER. As opposed to Figure 2, the interaction modalities as well as the physical relationships are not yet defined at this stage of the design.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Class III-input-a (U and Robject distant): U→(Ain→S→Aout)→[Rtool, Robject]→Ain→S Class III-input-b (U and Robject distant): U→(Rtool→Ain→S→Aout)→[Rtool, Robject]→Ain→S Class IV-input-a (U and Robject distant): U→(Ain→S→Aout)→Robject→Ain→S Class IV-input-b (U and Robject distant):</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 4 :</head><label>4</label><figDesc>Fig. 4: Mini screen design space.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Fig. 5 :</head><label>5</label><figDesc>Fig. 5: Image Overlay system.</figDesc><graphic coords="7,70.00,315.00,209.00,209.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1: Characteristics of ASUR entities and relationships. Two levels of abstraction in describing interaction using ASUR</head><label>1</label><figDesc></figDesc><table><row><cell>Entities (R and A) characteristics</cell><cell>Relationships characteristics</cell></row><row><cell>-Perceptual/Action sense and location</cell><cell>-Interaction language</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>This work is supported by the French Ministry of Research under contract MMM. Special thanks to C. Marmignon for the CASPER picture and to G. Serghiou for reviewing the paper.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">An Image Overlay System for Medical Data Visualization</title>
		<author>
			<persName><forename type="first">M</forename><surname>Blackwell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Nikou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>DiGioia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kanade</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of MICCAI&apos;98</title>
				<meeting>MICCAI&apos;98</meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">1496</biblScope>
			<biblScope unit="page" from="232" to="240" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Consistency in Augmented Reality Systems</title>
		<author>
			<persName><forename type="first">E</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Nigay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Troccaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of EHCI&apos;01</title>
		<title level="s">LNCS</title>
		<meeting>EHCI&apos;01</meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="volume">2254</biblScope>
			<biblScope unit="page" from="117" to="130" />
		</imprint>
	</monogr>
	<note>IFIP WG2.7 (13</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">ASUR++: a Design Notation for Mobile Mixed Systems</title>
		<author>
			<persName><forename type="first">E</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Nigay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Interacting With Computers</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="497" to="520" />
			<date type="published" when="2003">2003</date>
			<publisher>Elsevier Science</publisher>
		</imprint>
	</monogr>
	<note>IWC</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Embodied User Interfaces : Towards Invisible User Interfaces</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">P</forename><surname>Fishkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P</forename><surname>Moran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">L</forename><surname>Harrison</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of EHCI&apos;98</title>
				<meeting>EHCI&apos;98</meeting>
		<imprint>
			<publisher>Kluwer academic</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
	<note>IFIP WG2.7 (13</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Mobile Telepresence Surgery</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Green</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Jensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Hill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Symposium on Medical Robotics and Computer Assisted Surgery</title>
				<meeting><address><addrLine>New-York</addrLine></address></meeting>
		<imprint>
			<publisher>Wiley</publisher>
			<date type="published" when="1995">1995</date>
			<biblScope unit="page" from="98" to="103" />
		</imprint>
	</monogr>
	<note>Proceedings of MRCAS&apos;95</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ishii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ullmer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of CHI&apos;97</title>
				<meeting>CHI&apos;97</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1997">1997</date>
			<biblScope unit="page" from="234" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Generic Platform for Addressing the Multimodal Challenge</title>
		<author>
			<persName><forename type="first">L</forename><surname>Nigay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Coutaz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of CHI&apos;95</title>
				<meeting>CHI&apos;95</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1995">1995</date>
			<biblScope unit="page" from="98" to="105" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Semi-Active Guiding Systems in Surgery: A Two-DOF Prototype of the Passive Arm with Dynamic Constraints</title>
		<author>
			<persName><forename type="first">J</forename><surname>Troccaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Delnondedieu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mechatronics</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="399" to="421" />
			<date type="published" when="1996">1996</date>
			<publisher>Elsevier Science</publisher>
		</imprint>
	</monogr>
	<note>PADyC)</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Perceptual User Interfaces</title>
		<author>
			<persName><forename type="first">M</forename><surname>Turk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Robertson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
				<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="32" to="70" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">First steps Towards Handheld Augmented Reality</title>
		<author>
			<persName><forename type="first">D</forename><surname>Wagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schmalstieg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Symposium on Wearable Computers</title>
				<imprint>
			<publisher>IEEE Computer Society</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
	<note>Proceedings of ISWC&apos;03</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
