<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Designing Interaction Space for Mixed Reality Systems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<affiliation key="aff0">
								<orgName type="institution">Catholic University of Louvain</orgName>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department" key="dep1">Information Systems Unit</orgName>
								<orgName type="department" key="dep2">School of Management</orgName>
								<address>
									<addrLine>Place des Doyens, 1</addrLine>
									<postCode>+32 10 478525</postCode>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="laboratory">Communications and Remote Sensing Lab. Bâtiment Stévin</orgName>
								<address>
									<postCode>+32 10 478555</postCode>
									<settlement>Place du Levant</settlement>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Designing Interaction Space for Mixed Reality Systems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">E887526C5AC03F868319D60E08154096</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T22:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>User-interface design</term>
					<term>mixed interaction space</term>
					<term>spatial integration</term>
					<term>temporal integration</term>
					<term>mixed reality systems</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A mixed reality scenario involves many objects that may be related in various ways. These relations may lead to inconsistencies similar to those associated with continuous interaction. We propose here a model for the declarative representation of the design aspects involved in a MIS (Mixed Interaction Space).</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Mixed Reality (MR) is a state-of-the-art technology that merges the real and virtual worlds seamlessly in real time. It draws attention as a new human-interface technology that goes beyond the limits of conventional virtual reality.</p><p>In view of the multidisciplinary integration and associated complexity of MR systems, the reality paradigm given by <ref type="bibr" target="#b4">[4]</ref> proposes a taxonomy where Real Environments (RE) and Virtual Environments (VE) are, in fact, the two poles of the Reality-Virtuality Continuum, RE being the left pole and VE the right pole. Mixed Reality includes the continuum transitions from RE to Augmented Reality (AR), passing through Augmented Virtuality (AV) towards VE, but excludes the end-points, perceived as limit conditions. In both AV, in which real objects are added to virtual ones, and VE (or virtual reality), the surrounding environment is virtual, while in AR the surrounding environment is real.</p><p>The user's interaction with this Reality-Virtuality Continuum can be augmented by tangible interfaces. According to <ref type="bibr" target="#b6">[6]</ref> and <ref type="bibr" target="#b8">[8]</ref>, tangible interfaces are those in which each virtual object is registered to a (tangible) physical object and the user interacts with a virtual object by manipulating the corresponding tangible (physical) object.</p><p>The development and implementation of such systems becomes very complex, and the design guidance available for conventional interfaces is no longer valid for modeling this class of systems.</p><p>By definition <ref type="bibr" target="#b11">[11]</ref>, an interaction space may entail the representation of the visual, haptic and auditory elements that a user interface offers to its users. The interaction space for mixed reality systems should deal with elements that come from both the real and the virtual world. It entails the design of a mixed interaction space.</p><p>To address these questions we present here a model for the declarative representation of the design aspects involved in MIS (Mixed Interaction Space) design. The design aspects of a MIS are related to the spatial and temporal relationships between objects, the user's interaction focus and the insertion context of interaction spaces. They can facilitate or prevent the task goals from being attained, limiting interaction performance. An interaction space supporting these design characteristics could therefore be very useful to guarantee seamless interaction in the MR system.</p><p>The interaction space description is based on the presentation model definition given in <ref type="bibr" target="#b11">[11]</ref> and the model language is based on the spatio-temporal composition model given in <ref type="bibr" target="#b12">[12]</ref>.</p><p>As an example of how to use the approach for designing a Mixed Interaction Space we consider the image-guided surgery (IGS) interaction space scenario. In such systems a complex surgical procedure can be navigated visually with great precision by overlaying, on an image of the patient, a color-coded preoperative plan specifying details such as the locations of incisions, areas to be avoided and the diseased tissue. It is a typical application of augmented reality (AR) systems, where the virtual world corresponding to the preoperative information should be correctly aligned in real time with the real world corresponding to the intraoperative information. This case study was thoroughly discussed by the authors in <ref type="bibr" target="#b9">[9]</ref>.</p><p>Note that the terms 'digital' and 'virtual' are used in this work in the sense of not physical or real. The terms 'real' and 'physical' are used in the sense of not digital or virtual.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Interaction spaces</head><p>An Interaction Space (IS) is assumed to be the complete presentation environment required for carrying out a particular interactive task. It very often requires dealing with questions such as whether particular objects or scenes being displayed are real or virtual, whether images of scanned data should be considered real or virtual, whether a real object must look realistic whereas a virtual one need not, etc. For example, in some AR systems there is little difficulty in labeling the remotely viewed video scene as real and the computer-generated images as virtual. If we compare this instance, furthermore, to a MR system in which one must reach into a computer-generated scene with one's own hand and "grab" an object, there is also no doubt, in this case, that the object being grabbed is "virtual" and the hand is "real". Nevertheless, in comparing these two examples, it is clear that the reality of one's own hand and the reality of a video image are quite different, suggesting that a decision must be made about whether using the identical term real for both cases is indeed appropriate.</p><p>In this work we adopt the distinction given by <ref type="bibr" target="#b4">[4]</ref>, where real objects are any objects that have an actual objective existence and virtual objects are objects that exist in essence or effect, but not formally or actually. In order for a real object to be viewed, it can either be observed directly or it can be sampled and then resynthesised via some display device. In order for a virtual object to be viewed, it must be simulated, since in essence it does not exist. This entails the use of some sort of description or model of the object. Now we can say that an interaction space is composed of:</p><p>x Real Interaction Space (RIS): if and only if it is composed of real components, e.g. real concrete interaction objects such as physical objects;</p><p>x Virtual Interaction Space (VIS): if and only if it is composed of virtual concrete interaction objects;</p><p>x Mixed Interaction Space (MIS): if and only if it is composed of virtual concrete interaction objects added to the real environment, e.g. combined with real concrete interaction objects.</p><p>Each MIS is composed of a Virtual Interaction Space (VIS) and of a Real Interaction Space (RIS), which are supposed to be physically constrained by the user's workspace and which may all be displayed on the workspace simultaneously.</p><p>Each workspace is composed of at least one Interaction Space (IS) called the basic IS, from which it is possible to derive the other ISs (Figure <ref type="figure">1</ref>). This configuration becomes necessary once the user can manipulate objects in the virtual world through the VIS or objects in the real world through the RIS.</p><p>Concrete Interaction Object (CIO): this is an object belonging to the Interaction Space that any user can see with the appropriate artefacts (e.g. a see-through head-mounted display). We have two types of CIO, real and virtual. The Real Concrete Interaction Object is part of the RIS (e.g. live video, physical objects like a pen or a needle), which can have a representation in the virtual world, whereby it becomes a virtual concrete interaction object (Figure <ref type="figure">1</ref>).</p><p>Abstract Interaction Object (AIO): this consists of an abstraction of all CIOs, from both presentation and behavioral viewpoints, that is independent of any given computing platform. By definition, an AIO does not have any graphical appearance, but each AIO is connected to 0, 1 or many CIOs having different names and presentations on various computing platforms.</p><p>Figure <ref type="figure">1</ref>. Representation of interaction spaces for mixed reality systems.</p></div>
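The RIS/VIS/MIS classification and the CIO/AIO distinction above can be sketched as a small data model. This is an illustrative sketch only: the class and function names are ours, not part of the paper's model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CIO:
    """Concrete Interaction Object: perceivable object, world is "real" or "virtual"."""
    name: str
    world: str

@dataclass
class AIO:
    """Abstract Interaction Object: platform-independent, linked to 0..n CIOs."""
    name: str
    cios: List[CIO] = field(default_factory=list)

def classify_space(cios):
    """Classify an interaction space as RIS, VIS or MIS from the worlds of its CIOs."""
    worlds = {c.world for c in cios}
    if worlds == {"real"}:
        return "RIS"
    if worlds == {"virtual"}:
        return "VIS"
    return "MIS"  # virtual CIOs combined with real ones
```

For instance, a physical needle (real CIO) combined with its virtual representation yields a MIS.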
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Design aspects of interaction spaces for MR systems</head><p>Regarding the vast possibilities to compose, to interact with and to insert the Interaction Space into the environment, we should take into account the following design aspects, which are described in Figure <ref type="figure" target="#fig_0">2:</ref></p><p>x spatial integration;</p><p>x temporal integration;</p><p>x insertion context;</p><p>x user's interaction focus.</p><p>Spatial integration. The interaction space may involve a large number of media objects which should be integrated into the MIU (Mixed Interaction Unit). This integration concerns the spatial ordering and topological features between Concrete Interaction Objects (e.g. all participating visual media objects).</p><p>In the context of an AR application, a designer would like to place spatial objects (text, images, videos, animation, etc.) in the interaction space in such a way that their relationships are clearly defined in a declarative way, i.e., "text A is placed at location (100, 100), text B appears 8 cm to the right and 12 cm below the upper side of A".</p><p>As related by <ref type="bibr" target="#b12">[12]</ref>, spatial composition between two objects aims at representing three aspects:</p><p>x the topological relationships between the objects (disjoint, meet, overlap, etc.). For 3D object relationships we must also consider whether the object is placed in front of, inside or behind the other object <ref type="bibr" target="#b1">[2]</ref>; x the directional relationships between the objects (left, right, above, above-left, etc.); x the distance/metric relationships between the objects (outside 5 cm, inside 2 cm, etc.).</p><p>An N-dimensional projection relation is an N-tuple of 1D relations, e.g. R = (Rx, Ry). Each 1D relation corresponds to the relationship between the N-dimensional objects in one of the dimensions. So if V is the number of possible 1D relations at a particular resolution, the number of N-dimensional relations that can be defined at the same resolution is V^N.</p><p>According to the requirements of the particular application, not all dimensions need to be tuned at the same resolution, in which case the maximum number of N-dimensional relations is the product of the corresponding numbers for each dimension. Figure <ref type="figure" target="#fig_1">3</ref> illustrates the 169 (13^2) primitive projection relations between regions on the plane, on the initially discussed (Allen's) resolution scheme. All previous properties can be analogously extended to N dimensions.</p><p>So, given an N-dimensional relation, the corresponding spatial configuration can be easily inferred by combining all the 1D configurational inferences. The complete description of this approach can be found in <ref type="bibr" target="#b1">[2]</ref>.</p><p>To specify the spatial integration we propose to use the generalized methodology for representing the distance between two spatial objects given in <ref type="bibr" target="#b12">[12]</ref>. We assume that spatial objects are rectangles; more complex objects can also be represented as rectangles by using their minimum bounding rectangle (MBR) approximation. The same can be done with the minimum bounding cube for 3D objects.</p><p>The distance will be expressed in terms of the distance between the closest vertices. For each spatial object O, we label its vertices as Ovi, starting from the bottom-left vertex in a clockwise manner. As closest, we define the pair of vertices (Avi, Bvj) with the minimum Euclidean distance. The designer of a mixed interaction space must be able to express spatial composition predicates in an unlimited manner. For instance (see Figure <ref type="figure" target="#fig_2">4</ref>), the designer could describe the appearing composition as: "object B is to appear … cm lower than the upper side of object A and … cm to the right".</p><p>So, assuming two spatial objects A and B, we define the generalized spatial relationship between these objects as: Spatial_integration(Rij, vi, vj, x, y), where Rij is the identifier of the topological-directional relationship between A and B (derived from <ref type="bibr" target="#b1">[2]</ref>), vi, vj are the closest vertices of A and B, respectively, and x, y are the horizontal and vertical distances between vi and vj.</p><p>The example below illustrates these features. The real scenario of this description is illustrated in Figure <ref type="figure">5</ref> and the spatial composition (interaction space layout) of the above scenario is illustrated in Figure <ref type="figure" target="#fig_3">6</ref>, while the temporal one will be discussed in the next sub-section.</p></div>
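The Spatial_integration(Rij, vi, vj, x, y) tuple can be sketched for 2D MBRs as follows. This is a sketch under assumptions: a coarse center-based directional relation stands in for the full topological-directional catalogue of [2], and vertex labels follow the paper's bottom-left, clockwise convention.

```python
from itertools import product
from math import dist

def vertices(r):
    """Vertices of an MBR r = (xmin, ymin, xmax, ymax), labeled 1..4
    starting from the bottom-left vertex in a clockwise manner."""
    x1, y1, x2, y2 = r
    return {1: (x1, y1), 2: (x1, y2), 3: (x2, y2), 4: (x2, y1)}

def directional(a, b):
    """Coarse directional relation of B w.r.t. A from MBR centers
    (an illustrative stand-in for the Rij relationship identifiers of [2])."""
    ax, ay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    bx, by = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    horiz = "right" if bx > ax else "left" if bx < ax else ""
    vert = "above" if by > ay else "below" if by < ay else ""
    return "-".join(filter(None, (vert, horiz))) or "same"

def spatial_integration(a, b):
    """Build the (Rij, vi, vj, x, y) tuple: relationship identifier, the pair of
    closest vertices, and their horizontal/vertical offsets."""
    va, vb = vertices(a), vertices(b)
    i, j = min(product(va, vb), key=lambda p: dist(va[p[0]], vb[p[1]]))
    x, y = vb[j][0] - va[i][0], vb[j][1] - va[i][1]
    return directional(a, b), i, j, x, y
```

For two side-by-side rectangles the tuple captures that B lies to the right of A, with the closest-vertex pair and its (x, y) offsets.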
<div xmlns="http://www.tei-c.org/ns/1.0"><head>"The IGS interaction space starts with the background presentation of a live video image A located at a point relative to the application origin. At the same time a path-line graphic B is overlapped on image A according to the registration procedures. At a time t, determined by tracking-system procedures, the Menu_options containing the texts C, D and E is displayed in the interaction space. The object C appears partially overlapping the right side of object B, … cm lower than the upper side of object B and … cm to the right of B. The object D appears … cm to the bottom-right and … cm to the right side of C. The object E appears … cm lower than the bottom side of object D and … cm to the left of D."</head><p>The directional and relational relationships between the objects in many augmented reality applications result from the registration procedures used to mix the real and digital worlds in the correct way. For instance, AR systems based on marker recognition to relate the real and virtual worlds (such as those using the ARToolKit<ref type="foot" target="#foot_0">1</ref> library) assume that the marker is in the x-y plane, with the z axis pointing downwards from the marker plane. So, vertex positions can be represented in 2D coordinates by ignoring the z-axis information, and the virtual object can then be placed at an (x, y, z) position relative to the center of the marker.</p><p>These spatial aspects can be defined by:</p><p>1. the designer (at design time),</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">the user,</head><p>3. or the system (while the application progresses).</p><p>This classification will be used as a spatial_control_ID parameter in the composition of mixed interaction spaces.</p><p>The spatial integration of objects into the interaction space is a relevant aspect, since that information facilitates processing through efficient allocation of attentional resources. For instance, an adequate spatial integration of the objects can facilitate the user's interpretation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Temporal integration</head><p>Besides the spatial aspects related to the integration of CIOs into the MIU, we should also consider the temporal aspects that involve all participating media objects (e.g. visual and sound).</p><p>As mentioned in <ref type="bibr" target="#b0">[1]</ref>, temporal synchronization between two objects can be represented by thirteen relations, paired with their inverses as follows:</p><formula xml:id="formula_0">before (1) / after (8); meets (2) / met-by (9); overlaps (3) / overlapped-by (10); starts (4) / started-by (11); during (5) / contains (12); finishes (6) / finished-by (13); equal (7)</formula><p>The synchronization of interaction space objects is defined according to the task requirements. Another aspect is that we can have different types of control. For instance, a virtual object can be displayed automatically in the interaction space when a given object is recognized in the real world, or it can be displayed on the user's demand. The temporal control can then be defined by: 1. the user (e.g. during execution time); 2. the system (e.g. during execution time); 3. a third party (e.g. an agent system capable of making decisions and initiating actions independently during execution time). This classification will be used as a temporal_control_ID parameter in the composition of mixed interaction spaces.</p><p>Figure <ref type="figure" target="#fig_8">7</ref> shows the temporal synchronization diagram related to the spatial diagram illustrated in Figure <ref type="figure" target="#fig_3">6</ref>. The text objects C, D and E appear automatically according to the tracking system and they disappear according to the user's interaction.</p><p>Figure <ref type="figure" target="#fig_8">7</ref>. Temporal composition of the Image-guided surgery example given in Figure <ref type="figure" target="#fig_3">6</ref>.</p></div>
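The thirteen Allen relations behind the temporal Relation_type_ID can be computed for a pair of intervals. This sketch returns relation names rather than the paper's numeric IDs, to stay neutral about the exact ID assignment; the branch order is one possible encoding.

```python
def allen(a, b):
    """Allen's 13 interval relations between a = (a1, a2) and b = (b1, b2)."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1: return "before"
    if b2 < a1: return "after"
    if a2 == b1: return "meets"
    if b2 == a1: return "met-by"
    if (a1, a2) == (b1, b2): return "equal"
    if a1 == b1: return "starts" if a2 < b2 else "started-by"
    if a2 == b2: return "finishes" if a1 > b1 else "finished-by"
    if b1 < a1 and a2 < b2: return "during"
    if a1 < b1 and b2 < a2: return "contains"
    return "overlaps" if a1 < b1 else "overlapped-by"

def inverse(rel):
    """Pair each relation with its inverse; 'equal' is its own inverse."""
    pairs = {"before": "after", "meets": "met-by", "overlaps": "overlapped-by",
             "starts": "started-by", "during": "contains", "finishes": "finished-by"}
    inv = {**pairs, **{v: k for k, v in pairs.items()}, "equal": "equal"}
    return inv[rel]
```

Sequential synchronization corresponds to `before`/`after`; the remaining eleven relations are the simultaneous ones discussed in the text.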
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Insertion context of devices and interaction spaces</head><p>Besides the spatial and temporal integration of interaction space objects, it is important to understand how the insertion of devices and interaction spaces in the environment can contribute to a better interaction.</p><p>According to the user's focus while performing a task, we have identified four spatial zones for device insertion, considering the level of periphery (see Figure <ref type="figure" target="#fig_5">8</ref>):</p><p>1. Central zone: it corresponds to a device insertion distance of 0 to 45 cm from the user's task focus.</p><p>2. Personal zone: it corresponds to a device insertion distance of 46 cm to 1.2 m from the user's task focus.</p><p>If the device is inserted in the central zone of the user's task, s/he does not need to change her/his attention focus to perform the task. Otherwise, if the user is changing the attention focus all the time, it is probable that the device is inserted outside the central zone, and thus in a peripheral context of use (Figure <ref type="figure" target="#fig_6">9</ref>).</p><p>In the Museum project, an application of the NaviCam system <ref type="bibr" target="#b7">[7]</ref>, the device is inserted in the central context of the user's tasks, so she does not need to change her attention focus to perform the task. Conversely, if the information is displayed on a screen in the museum room and the user needs to look at the screen and then look at the painting, s/he changes her/his attention focus all the time; in this case the device is inserted in a peripheral context.</p></div>
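The four insertion zones reduce to a simple distance classifier. The boundaries are taken from the text; as an assumption, the gap the text leaves between the personal (up to 1.2 m) and social (from 1.3 m) zones is closed at 1.2 m.

```python
def insertion_context(distance_m):
    """Insertion_Context_ID from the device's distance to the user's task focus,
    in metres: central (<= 0.45), personal (<= 1.2), social (<= 3.6), else public."""
    zones = [(0.45, 1, "central"), (1.2, 2, "personal"), (3.6, 3, "social")]
    for limit, zone_id, name in zones:
        if distance_m <= limit:
            return zone_id, name
    return 4, "public"
```

A device 30 cm from the task focus is thus in the central zone, while a wall screen 4 m away is in the public zone.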
<div xmlns="http://www.tei-c.org/ns/1.0"><head>User's interaction focus</head><p>When there are multiple sources of information and two worlds of interaction (real and virtual), we must choose what to attend to and when. At times, we need to focus our attention exclusively on a single item without interference from other items. At other times, we may need to time-share or divide our attention between two (or more) items of interest, which can be part of the same or of different worlds.</p><p>For example, in the Museum project <ref type="bibr" target="#b7">[7]</ref> the user wears a see-through head-mounted display in which information about an exhibit is displayed. The user is thus able to perceive real objects (the exhibit) and added synthetic information. The object of the task here is the painting of the exhibit. Therefore, the user's interaction focus is shared between virtual and real objects.</p><p>Following the definition given by <ref type="bibr" target="#b3">[3]</ref>, the user performs a task in order to manipulate or modify either an object of the real world, in which case the task focus is on the real world, or an object of the virtual world, in which case the task focus is on the virtual world.</p><p>Therefore, by considering all possibilities of interaction focus while the user is performing a specific task, we have found five possible combinations:</p><p>Interaction focus on the Real World without shared attention (RW): the interaction is focused on only one item in the real world. There are no real items competing for the user's attention.</p><p>Interaction focus on the Virtual World without shared attention (VW): the interaction is focused on only one item in the virtual world. There are no virtual items competing for the user's attention.</p><p>Interaction focus Shared in the Real World (intra-world interaction focus, SRW): the interaction focus is shared between items in the real world.</p><p>Interaction focus Shared in the Virtual World (intra-world interaction focus, SVW): the interaction focus is shared between items in the virtual world.</p><p>Interaction focus Shared between Worlds (inter-world interaction focus, SW): the interaction focus is shared between items belonging to different worlds (real and virtual).</p><p>The five possible interaction focus types discussed here will be used as the Interaction_Focus_ID parameter in the composition tuple of mixed interaction spaces.</p></div>
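The five Interaction_Focus_ID values can be derived from how many items compete for attention in each world. The derivation function below is our illustrative reading of the definitions above, not part of the paper's model.

```python
from enum import Enum

class Focus(Enum):
    """The five interaction focus types."""
    RW = "real world, no shared attention"
    VW = "virtual world, no shared attention"
    SRW = "shared within the real world (intra-world)"
    SVW = "shared within the virtual world (intra-world)"
    SW = "shared between worlds (inter-world)"

def focus_type(real_items, virtual_items):
    """Classify the interaction focus from the number of attended items per world."""
    if real_items and virtual_items:
        return Focus.SW
    if real_items:
        return Focus.RW if real_items == 1 else Focus.SRW
    return Focus.VW if virtual_items == 1 else Focus.SVW
```

In the Museum example, one real painting plus one virtual annotation gives inter-world shared focus (SW).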
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Composition of mixed interaction spaces</head><p>This declarative definition should be transformed into an internal representation that captures the topological, directional and temporal relationships as well as the user's interaction focus and the insertion context of the IS. Here we propose a definition model to support these needs.</p><p>The composition of a mixed interaction space consists of several independent fundamental compositions. The term independent implies that objects participating in these compositions are not related implicitly (either spatially, or temporally, or by interaction focus or insertion context), except for their implicit relationship at the start point. Thus, all compositions are explicitly related to the start point. We call these compositions composition tuples, and they include spatially and/or temporally related objects.</p><formula xml:id="formula_1">MIS composition = {[Spatial_Integration], [Temporal_Integration], [Interaction_Focus], [Insertion_Context]}</formula><p>Where: Spatial_Integration contains the following optional parameters: [Spatial_Integration] relation_type_ID, vi, vj, x, y, spatial_control_ID. Relation_Type_ID is given by one of the possible relationships presented in <ref type="bibr" target="#b1">[2]</ref>, which also explores the possibility of extending them to 3D relationships. Spatial_control_ID represents who has the spatial control: designer, user or system, respectively. Temporal_Integration can have the following optional parameters: [Temporal_Integration] relation_type_ID, temporal_control_ID. Relation_type_ID is given by one of the Allen relation IDs represented in Table <ref type="table" target="#tab_1">1</ref>. Temporal_control_ID represents who has the temporal control: user, system or third party, respectively.</p><p>Interaction_Focus and Insertion_Context do not have sub-parameters: Interaction_Focus corresponds to the user's interaction focus parameter during an interaction. This parameter is defined for each composition and it can assume one of the 5 possible values discussed in the previous subsection. Insertion_Context corresponds to the insertion context of the interaction space in the environment. This parameter is defined only for the main interaction space composition. It can assume one of the 4 possible values discussed in the previous subsection.</p><p>The objects to be included in a composition tuple of a MIS are those that are spatially and/or temporally and/or focus-shared related. In our example (Figure <ref type="figure" target="#fig_3">6</ref>, with the spatial integration description, and Figure <ref type="figure" target="#fig_8">7</ref>, with the related temporal integration description), we have a composition of MIS (mixed interaction space). It has to be stressed that, when the host MIS (i.e., IGS_interactionSpace) ends, all the MIS started by it are also stopped (i.e., Menu_options). There is an issue regarding the mapping of the spatio-temporal specifications into the composition tuples: the classification of the involved objects. The proposed procedure is the following: for each object Ai, we check whether it is related to objects already classified in an existing tuple. If the answer is positive, Ai is classified in the appropriate composition tuple (a procedure that possibly leads to a reorganization of the tuples). Otherwise, a new composition tuple, composed of the start point and Ai, is created.</p></div>
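The tuple-classification procedure described above (check each object Ai against existing tuples, merge the tuples it bridges, otherwise open a new tuple anchored at the start point) can be sketched as follows; `ORIGIN` is a hypothetical name for the spatio-temporal start point.

```python
ORIGIN = "t0"  # hypothetical label for the spatio-temporal start point

def build_tuples(objects, related):
    """Partition objects into composition tuples. `related(a, b)` is the explicit
    spatial/temporal/focus relation test; every tuple implicitly contains ORIGIN."""
    tuples = []
    for ai in objects:
        # tuples holding at least one object explicitly related to ai
        hits = [t for t in tuples if any(related(ai, o) for o in t)]
        merged = {ai}
        for t in hits:
            tuples.remove(t)  # reorganization: merge all tuples ai bridges
            merged |= t
        tuples.append(merged)
    return [t | {ORIGIN} for t in tuples]
```

With A related to B and B related to C, the objects A, B, C end up in one tuple while an unrelated D gets its own.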
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Queries</head><p>During the application development process, it is probable (especially in the case of complex and large applications) that authors will need information related to these relationships. The related queries, depending on the spatial, temporal, interaction focus and insertion context relationships, may be classified into the following query categories:</p><p>x pure spatial or temporal query: only a temporal or a spatial relationship is involved in the query. For instance, "which objects always overlap the presentation of live video A?", "which objects spatially lie above object B in the interaction space?". x spatio-temporal query: where a combined relationship is involved. For instance, "which objects spatially overlap with object A during its presentation?". x MIS query: spatial or temporal layouts of the application considering interaction focus and insertion context. For instance, "what is the spatial integration (layout of the MIS) when the user's interaction focus is shared between A and B?", "which objects are presented when the user's interaction focus is on the real world?", "when the user's focus is on the real world, what is the insertion context of the MIS?", "when the user has the temporal control of the presentation, where is the user's interaction focus located?"</p><p>The answers to such queries may indicate potential problems during interaction, such as discontinuous interaction. For instance, if the user has the temporal control during an interaction and his interaction focus is on some object in the real world, he/she will probably switch between operation modes and attention foci to control, or to interact with, the presentation. This characterizes a functional and perceptive discontinuity during interaction, as discussed in <ref type="bibr" target="#b10">[10]</ref>. Such queries can be evaluated automatically at design time.</p></div>
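Once the composition tuples are in a structured form, the query categories above reduce to predicate filters. The dictionary layout below is a toy representation with assumed field names, not the paper's notation.

```python
# toy composition tuples (field names are illustrative assumptions)
compositions = [
    {"objects": ("A", "B"), "temporal": "overlaps", "focus": "SW", "temporal_control": "system"},
    {"objects": ("B", "C"), "temporal": "before", "focus": "RW", "temporal_control": "user"},
]

def query(comps, **criteria):
    """Filter composition tuples by exact-match criteria; covers pure spatial or
    temporal queries, spatio-temporal queries and MIS queries alike."""
    return [c for c in comps if all(c.get(k) == v for k, v in criteria.items())]

# "which objects overlap the presentation of A?"
overlaps_a = [c["objects"] for c in query(compositions, temporal="overlaps") if "A" in c["objects"]]

# flag a potential discontinuity: user holds temporal control while focused on the real world
risky = query(compositions, temporal_control="user", focus="RW")
```

The `risky` filter is the design-time check suggested in the text for functional and perceptive discontinuities.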
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>In this work we have reviewed and extended some approaches to designing mixed interaction spaces. With them we have predictively modeled user interaction to evaluate design strategies and to support adaptation for continuous interaction while dealing with mixed spaces of interaction.</p><p>As contributions of this work we have highlighted:</p><p>x managing the large number of options for MIS design during the development of MR systems; x acquiring the spatial, temporal and focus layouts of the MIS under development of a MR system for verification purposes, such as those related to continuous interaction; x helping designers to envision future interactive mixed systems.</p><p>Finally, we should be aware that specific design aspects such as the spatial and temporal integration of different media objects have implications for human perception. However, the information that people assimilate from a modality of interaction (e.g., the visual modality) also depends on their internal motivation, what they want to find and how well they know the domain.</p><p>Acknowledgements</p><p>We gratefully acknowledge the support from the Région Wallonne under contract WALEO 21/5129. The work described here is a part of the MERCATOR project available on http://www.tele.ucl.ac.be/PROJ/MERCATOR_MULTI_e.html</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 .</head><label>2</label><figDesc>Figure 2. Design aspects related to the interaction space.</figDesc><graphic coords="3,52.29,136.55,234.72,119.28" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 3 .</head><label>3</label><figDesc>Figure 3. Relations between 2D regions, adapted from [2].</figDesc><graphic coords="4,52.29,342.59,236.40,103.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. Spatial relationships.</figDesc><graphic coords="4,89.85,477.35,159.72,183.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 6 .</head><label>6</label><figDesc>Figure 6. Spatial composition of the Image-guided interaction space.</figDesc><graphic coords="4,336.69,89.27,177.60,205.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>3 .</head><label>3</label><figDesc>Social zone: it corresponds to a device insertion distance of 1.3 to 3.6 m from the user's task focus. 4. Public zone: it corresponds to a device insertion distance greater than 3.6 m from the user's task focus. The four possible insertion context types discussed here will be used as the Insertion_Context_ID parameter in the composition tuple of mixed interaction spaces.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 8 .</head><label>8</label><figDesc>Figure 8. Zones of insertion context according to user's task focus. 1.Central zone; 2.Personal zone; 3.Social zone and 4.Public zone.</figDesc><graphic coords="5,347.49,522.59,190.92,126.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 9 .</head><label>9</label><figDesc>Figure 9. Examples of insertion contexts regarding the user's task focus. The left picture shows the insertion of interaction spaces in the Personal zone and the right picture shows insertion in the Central zone.</figDesc><graphic coords="6,52.29,223.91,105.12,84.12" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Vi and Vj</head><label></label><figDesc>Vi and Vj are the closest vertices between two objects A and B, respectively, and x and y are the horizontal and vertical distances between Vi and Vj.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 7</head><label>7</label><figDesc>(Figure 7, with the related temporal integration description.) A, B and C should be in the same composition tuple, since A relates to B and B relates to Menu_options. On the other hand, if an object is related to no other object, neither spatially nor temporally, then it composes a different tuple. The above high-level specifications are transformed into the following model. It is important to stress that one element in composition tuple c3 represents the spatio-temporal origin of the Menu_options.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The virtual CIO is a part of the VIS (e.g. text, image, animation, push button, or list box). The virtual CIO can also entail the virtual representation of the real CIO. A CIO is said to be simple if it cannot be decomposed into smaller CIOs. A CIO is said to be composite if it can be decomposed into smaller units. Two categories are distinguished:</figDesc><table /><note>presentation CIO, which is any static CIO allowing no user interaction, and control CIO, which supports some interaction or user-interface control by the user. Both presentation and control CIOs can be part of the RIS and/or the VIS. Abstract Interaction Object (AIO): this consists of an</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1.</head><label>1</label><figDesc>Allen's seven relations and their inverses.</figDesc><table><row><cell>Relation ID</cell><cell>Relation ID</cell></row></table><note>Temporal synchronization can be represented by thirteen possible temporal relationships, considering the inverse of each relationship except the equal relation. Basically there are two types of temporal synchronization: sequential (the before relation) and simultaneous (the equal, meets, overlaps, during, starts, or finishes relations). Note from Table 1 that all simultaneous relationships (such as overlaps, during, starts, and finishes) can be generalized to the equal relation by inserting some delay time when needed. For example, in the x before y relation there is a time gap greater than zero between x and y, whereas in the x meets y relation the gap between x and y is zero.</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">More information about ARToolKit can be found at http://www.hitl.washington.edu/research/shared_space/download/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Maintaining knowledge about temporal intervals</title>
		<author>
			<persName><forename type="first">James</forename><forename type="middle">F</forename><surname>Allen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page" from="832" to="843" />
			<date type="published" when="1983-11">November 1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Querying Multimedia Documents By Spatiotemporal Structure</title>
		<author>
			<persName><forename type="first">V</forename><surname>Delis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Papadias</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Flexible Query Answering Systems</title>
				<meeting>the International Conference on Flexible Query Answering Systems<address><addrLine>Denmark</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag LNCS</publisher>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Augmented Reality: Which Augmentation for Which Reality?</title>
		<author>
			<persName><forename type="first">E</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Nigay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conference Proceedings of DARE2000</title>
				<editor>
			<persName><forename type="first">W</forename><surname>Mackay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename></persName>
		</editor>
		<meeting><address><addrLine>Elsinore, Denmark</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2000-04">April 2000</date>
			<biblScope unit="page" from="165" to="167" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Taxonomy of Real and Virtual World Display Integration</title>
		<author>
			<persName><forename type="first">P</forename><surname>Milgram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Herman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Mixed Reality: Merging Real and Virtual Environments</title>
				<imprint>
			<publisher>Ohmsha &amp; Springer-Verlag</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page" from="5" to="30" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Nigay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Dubois</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Renevier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Pasqualetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Troccaz</surname></persName>
		</author>
		<title level="m">Mixed Systems: Combining Physical and Digital Worlds, Conference Proceedings of HCI International</title>
				<meeting><address><addrLine>Crete, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="1203" to="1207" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Developing a generic augmented reality interface</title>
		<author>
			<persName><forename type="first">I</forename><surname>Poupyrev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">&amp;RPSXWHU</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="issue">3</biblScope>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">The World through the Computer: Computer Augmented Interaction with Real World Environments</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rekimoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nagao</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1995">1995</date>
			<publisher>User Interface Software and Technology</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Augmented Surfaces: A spatially Continuous Work Space for Hybrid Computing Environments</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rekimoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Saitoh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">&amp;+</title>
		<imprint>
			<biblScope unit="page" from="15" to="20" />
			<date type="published" when="1999-05">May, 1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Modeling Interaction for Image-Guided Procedures</title>
		<author>
			<persName><forename type="first">D</forename><surname>Trevisan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderdonckt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Macq</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Raftopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of International Conference on Medical Imaging SPIE2003</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Hanson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C.-T</forename><surname>Chen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><forename type="middle">L</forename><surname>Siegel</surname></persName>
		</editor>
		<meeting>International Conference on Medical Imaging SPIE2003<address><addrLine>San Diego</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2003-02">February 2003</date>
			<biblScope unit="page" from="108" to="118" />
		</imprint>
	</monogr>
	<note>International Society for Optical Engineering</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Continuity as Usability Property</title>
		<author>
			<persName><forename type="first">D</forename><surname>Trevisan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderdonckt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Macq</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the 10th International Conference on Human-Computer Interaction HCI International&apos;2003</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Stephanidis</surname></persName>
		</editor>
		<meeting>of 10 th International Conference on Human-Computer Interaction HCI International&apos;2003<address><addrLine>Heraklion; Mahwah</addrLine></address></meeting>
		<imprint>
			<publisher>Lawrence Erlbaum Associates</publisher>
			<date type="published" when="2003-06-27">22-27 June 2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Encapsulating Knowledge for Intelligent Interaction Objects Selection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderdonckt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bodart</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of InterCHI</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="1993">1993</date>
			<biblScope unit="page" from="424" to="429" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Spatiotemporal composition and indexing for large multimedia applications</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vazirgiannis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Theodoridis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sellis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Systems</title>
				<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="284" to="298" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The Magic Board, an Augmented Reality Interactive Device Based on Computer Vision</title>
		<author>
			<persName><forename type="first">Leon</forename><surname>Watts</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop on Continuity in Human Computer Interaction</title>
				<meeting><address><addrLine>Scheveningen, Netherlands</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000-04-02">April 2-3, 2000</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
