<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Decoupling of Modality Integration and Interaction Design for Multimodal Human-Robot Interfaces</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mathieu</forename><surname>Vallée</surname></persName>
							<email>vallee@ict.tuwien.ac.at</email>
							<affiliation key="aff0">
<orgName type="institution">Vienna University of Technology, Institute of Computer Technology</orgName>
								<address>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Dominik</forename><surname>Ertl</surname></persName>
							<email>ertl@ict.tuwien.ac.at</email>
							<affiliation key="aff0">
<orgName type="institution">Vienna University of Technology, Institute of Computer Technology</orgName>
								<address>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Decoupling of Modality Integration and Interaction Design for Multimodal Human-Robot Interfaces</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DC2ECD31BD94242AD0BA8A3AAAD58BC2</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T18:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract/>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The development of a multimodal Human-Robot Interface (HRI) involving mixed-initiative and context-awareness is complex and laborious. The integration of individual modalities (e.g., gesture recognition or speech output) and the design of natural human-robot interaction are two different tasks that each require their own expertise.</p><p>In this paper, we consider three different roles that participate in the development of a multimodal User Interface (UI): the interaction designer, the modality integrator, and the multimodal UI designer. We present how the decoupling between these roles is facilitated by tools based on interaction modeling. Finally, we discuss how the decoupling is beneficial for introducing mixed-initiative and context-awareness in multimodal HRI.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Roles in Multimodal UI Design</head><p>Previous work shows that it is common to separate the task of UI generation from that of application development <ref type="bibr" target="#b0">[1]</ref>. We expect that the development of a multimodal UI is laborious enough to be split up into different tasks as well. We therefore identify three different roles for a stronger decoupling of the tasks needed in UI generation. Figure <ref type="figure" target="#fig_0">1</ref> depicts the roles and their interactions:</p><p>- The interaction designer defines the modality-independent interaction between a human and a robot. This role is responsible for defining the UI behaviour.
- The modality integrator provides a particular modality (or group of modalities), like speech input. This requires either adapting and configuring existing toolkits or developing a new modality. Additionally, this role is responsible for defining the physical properties of the integrated modality.
- The multimodal UI designer considers the interplay between the modalities. This role takes the physical properties of the several integrated modalities into account in order to realize the desired interaction.</p><p>In current practice and in existing systems, a single person (or group of persons) mostly carries out the tasks of all three roles, with no clear separation of responsibilities. Often, the design of the UI is directed either towards demonstrating a particular modality or towards a single interaction description. In the first case, the risk is to limit interaction to what this modality supports, without considering usability. In the second case, the risk is to tailor the multimodal UI tightly to the intended interaction, thus limiting robustness and reusability.
As a result, it is still difficult to understand the respective advantages of the modalities (when to use a given modality) as well as the generic patterns governing their interplay (how to combine modalities for better usability). Furthermore, the role of the multimodal UI designer is particularly difficult, since it requires a good understanding of both the interaction design and the physical properties of the particular modalities.</p><p>We propose to use supportive tools for the design of multimodal UIs. In particular, there is a strong need for: (i) languages to express the desired interactions, (ii) languages to express the physical properties of individual modalities, and (iii) tools for facilitating the mapping between desired interaction descriptions and the physical properties of individual modalities.</p><p>A potential approach is the platform presented in <ref type="bibr" target="#b1">[2]</ref>, which uses a discourse model <ref type="bibr" target="#b2">[3]</ref> and a communication platform for semi-automatic multimodal UI generation.</p><p>First, an interaction designer models the desired interaction scenarios as formal discourses between a human and a computer. The interaction is defined at a high, modality-independent level and supports the modeling of mixed-initiative as well. At the same time, the modality integrator couples, e.g., speech input to the platform. For example, a freely available speech toolkit like Julius 1 is manually integrated into the platform so that the platform can use the speech recognition functionality of the toolkit. Finally, the multimodal UI designer couples the discourse and the available modalities. The discourse model provides basic units of communication that are derived from speech act theory. These are the intention, like an Informing, and the so-called propositional content. The propositional content is comparable to the meaning of the verb and the object in a simple English sentence.
An example of a propositional content is getNameOfUser. Together, the intention and the propositional content form a basic unit of communication. The multimodal UI designer defines the pairs of intentions and propositional contents that a given modality can express. For example, the designer decides whether speech input supports an Informing-getNameOfUser or not. This is a mapping of modality-specific representations (e.g., a speech grammar or a set of gesture recognition symbols) into a more generic representation based on communicative acts. In this way, the multimodal UI designer "programs" the platform, extended with individual modalities (plugins), in order to realize the desired interaction.</p></div>
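The mapping that the multimodal UI designer defines can be pictured as an explicit table from communicative acts to the modalities able to express them. The following Python sketch is purely illustrative: the `CommunicativeAct` type, the `modality_support` table, and all modality names are assumptions for this example, not the actual platform of [2].

```python
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen makes instances hashable, so they can be dict keys
class CommunicativeAct:
    intention: str               # e.g. "Informing" or "Requesting"
    propositional_content: str   # e.g. "getNameOfUser"

# The multimodal UI designer decides which modalities can express which act,
# e.g. whether speech input supports an Informing-getNameOfUser or not.
modality_support = {
    CommunicativeAct("Informing", "getNameOfUser"): {"speech_input", "gui"},
    CommunicativeAct("Requesting", "showShoppingList"): {"gui"},
}

def modalities_for(act: CommunicativeAct) -> set[str]:
    """Return the set of modalities able to express the given communicative act."""
    return modality_support.get(act, set())
```

Keeping this decision in a declarative table, rather than in the interaction model itself, is what would let the designer add or remove a modality plugin without touching the discourse.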
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Introducing Mixed-Initiative and Context-Awareness</head><p>The decoupled roles facilitate the definition of more natural multimodal HRI. In particular, the decoupling enables each role to concentrate on its main task. While the modality integrator concentrates on issues related to an individual modality (e.g., the performance of a speech recognition engine), the interaction designer focuses on more natural interaction for users.</p><p>For example, the interaction designer has to consider mixed-initiative and context-awareness when a semi-autonomous service robot performs a task jointly with humans in a real-world environment. Mixed-initiative allows both the human and the robot to initiate the interaction. However, this often requires a way for the robot to attract the attention of the user. Speech and movement of the robot serve this purpose well, while a GUI suffers from limitations (it must be visible to the user). A robot that attracts the attention of its user by moving towards products in a supermarket is described in <ref type="bibr" target="#b3">[4]</ref>. With context-awareness, the robot takes both its own context and the context of the user into account during the interaction. This is particularly useful when the robot is able to choose the right interaction modality depending on the context. Decoupling the roles makes it possible to delay the selection of a particular modality until runtime, so that the robot can take the specific context into account.</p></div>
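Runtime modality selection driven by context can be sketched as follows. The context keys, the preference order, and the function itself are hypothetical assumptions for illustration; the point is only that delaying the choice until runtime lets the robot react to its situation.

```python
def select_output_modality(context: dict, supported: set[str]) -> str:
    """Pick an output modality at runtime from the supported set.

    The context keys ("noisy", "user_visible") and the preference order
    below are assumptions for this sketch, not part of the platform.
    """
    # In a noisy environment, speech is unreliable; prefer the GUI if the
    # user can actually see it.
    if context.get("noisy") and context.get("user_visible") and "gui" in supported:
        return "gui"
    # Speech attracts the user's attention even when no screen is visible.
    if "speech_output" in supported:
        return "speech_output"
    # Moving the robot (e.g., towards a product) also draws attention.
    if "robot_movement" in supported:
        return "robot_movement"
    return next(iter(supported))  # fall back to any remaining modality
```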
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Discussion and Open Questions</head><p>The proposed approach has been studied successfully for the development of a multimodal UI of a semi-autonomous shopping robot with GUI, speech and gesture <ref type="bibr" target="#b4">[5]</ref>. Previous work demonstrates interaction scenarios involving mixed-initiative <ref type="bibr" target="#b3">[4]</ref> as well.</p><p>Despite these initial results, some open questions remain and are subject to future work. Regarding the interaction designer role, the interaction language affects the potential applications. The proposed discourse model focuses on process-oriented applications, like the one for shopping, and may not be suitable for other types of applications. A further evaluation of whether the interaction designer can really design "good" discourses without in-depth knowledge about modalities is under way. Regarding the modality integrator role, the effort for integrating a modality depends on the type of modality. For example, a transformation process for a GUI requires model-to-model transformations and is far more laborious than integrating a barcode reader, where only a few method calls have to be programmed.</p></div>
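The difference in integration effort can be illustrated with a minimal plugin sketch. The `ModalityPlugin` interface and all names here are hypothetical, not the platform's actual API; they only show why wrapping a barcode reader amounts to a few method calls, whereas a GUI would additionally require model-to-model transformations.

```python
from abc import ABC, abstractmethod

class ModalityPlugin(ABC):
    """Hypothetical common interface the platform could use to poll input modalities."""

    @abstractmethod
    def receive(self) -> tuple[str, str]:
        """Return an (intention, propositional content) pair."""

class BarcodeReaderPlugin(ModalityPlugin):
    """Integrating a barcode reader: essentially wrapping a single device call."""

    def __init__(self, read_barcode):
        self._read = read_barcode  # injected device-read function

    def receive(self) -> tuple[str, str]:
        code = self._read()
        # A scanned barcode maps directly to an Informing act about a product.
        return ("Informing", f"productScanned:{code}")
```

A usage example with a stubbed device: `BarcodeReaderPlugin(lambda: "9001").receive()` yields the communicative act `("Informing", "productScanned:9001")`.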
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>In this paper, we introduce a distinction between three roles involved in the design of multimodal UIs. This approach allows a better decoupling between tasks that require different expertise. Tools based on discourse modeling appropriately support this decoupling and facilitate communication between the roles. We discuss how this decoupling and the accompanying tools facilitate the design of a more natural human-robot interface, and we point out the relationship to mixed-initiative and context-awareness. Although future work is necessary to evaluate the simplicity of interaction design and modality integration, this approach has already enabled the development of a multimodal UI for a shopping robot.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Roles for Multimodal User Interface Design.</figDesc></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">http://julius.sourceforge.jp (visited at the time of this writing)</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Acknowledgement</head><p>This research has been carried out in the CommRob project (http://www. commrob.eu) and is partially funded by the EU (contract number IST-045441 under the 6th framework programme).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Coupling application design and user interface design</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J M J</forename><surname>De Baar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Foley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">E</forename><surname>Mullet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI &apos;92: Proceedings of the SIGCHI conference on Human factors in computing systems</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="1992">1992</date>
			<biblScope unit="page" from="259" to="266" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Semi-automatic generation of multimodal user interfaces</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ertl</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">EICS&apos;09: Proceedings of the ACM SIGCHI Symposium on Engineering Interactive Computing Systems</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Generating an abstract user interface from a discourse model inspired by human communication</title>
		<author>
			<persName><forename type="first">C</forename><surname>Bogdan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Falb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kaindl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kavaldjian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Popp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Horacek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Arnautovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Szep</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS-41)</title>
				<meeting>the 41st Annual Hawaii International Conference on System Sciences (HICSS-41)<address><addrLine>Piscataway, NJ, USA</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE Computer Society Press</publisher>
			<date type="published" when="2008-01">January 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Multimodal communication involving movements of a robot</title>
		<author>
			<persName><forename type="first">H</forename><surname>Kaindl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Falb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bogdan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CHI &apos;08 extended abstracts on human factors in computing systems</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="3213" to="3218" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Improving user interfaces of interactive robots with multimodality</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vallée</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Burger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ertl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lerasle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Falb</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Advanced Robotics</title>
				<meeting>the International Conference on Advanced Robotics<address><addrLine>ICAR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2009-06">June 2009</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
