<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">End-User Personalisation of Humanoid Robot Behaviour Through Vocal Interaction</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Simone</forename><surname>Gallo</surname></persName>
							<email>simone.gallo@isti.cnr.it</email>
							<affiliation key="aff0">
								<orgName type="institution">CNR-ISTI</orgName>
								<address>
									<addrLine>Via G. Moruzzi 1</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Computer Science Dept</orgName>
								<orgName type="institution">University of Pisa</orgName>
								<address>
									<addrLine>Largo B. Pontecorvo 3</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giacomo</forename><surname>Vaiani</surname></persName>
							<email>giacomo.vaiani@phd.unipi.it</email>
							<affiliation key="aff0">
								<orgName type="institution">CNR-ISTI</orgName>
								<address>
									<addrLine>Via G. Moruzzi 1</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">Computer Science Dept</orgName>
								<orgName type="institution">University of Pisa</orgName>
								<address>
									<addrLine>Largo B. Pontecorvo 3</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fabio</forename><surname>Paternò</surname></persName>
							<email>fabio.paterno@isti.cnr.it</email>
							<affiliation key="aff0">
								<orgName type="institution">CNR-ISTI</orgName>
								<address>
									<addrLine>Via G. Moruzzi 1</addrLine>
									<postCode>56127</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">End-User Personalisation of Humanoid Robot Behaviour Through Vocal Interaction</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">61B24DE298A6ADC9792443613D71ED85</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:20+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>End-User Development</term>
					<term>Human-Robot Interaction</term>
					<term>Smart Spaces</term>
					<term>CEUR-WS</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This study explores the integration of Large Language Models with social robots to facilitate End-User Development through natural language interactions. The paper presents a prototype system embodied in a Pepper robot that allows non-expert users to customise robot behaviours by defining personalisation rules via vocal commands. This system employs trigger-action programming, enabling users to create automations based on specific triggers and actions without requiring in-depth technical knowledge. Through an example scenario, we show how users can program the robot by employing voice commands to execute actions when an event occurs. The created automations can also involve available IoT objects. The study investigates the potential of natural language interaction to improve the usability and flexibility of robot programming, offering new possibilities for personalised interactions in various settings.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Over recent years, technological advancements have resulted in the development of increasingly sophisticated robots that are more closely integrated into human daily activities. This evolution is especially noticeable in the realm of social robots. These robots are designed to interact with humans in various social contexts, assisting with a range of tasks, including children's language education <ref type="bibr" target="#b0">[1]</ref>, older adults' cognitive training <ref type="bibr" target="#b1">[2]</ref>, and smart device management <ref type="bibr" target="#b2">[3]</ref>. Moreover, robots could offer advantages over traditional voice assistants, particularly in routine activity detection and support for individuals with functional limitations <ref type="bibr" target="#b3">[4]</ref> <ref type="bibr" target="#b4">[5]</ref> <ref type="bibr" target="#b5">[6]</ref>. Several end-user development tools have been introduced to facilitate user engagement and customisation of these robots' behaviour. These tools utilise various paradigms, such as block-based <ref type="bibr" target="#b6">[7]</ref> and natural language programming <ref type="bibr" target="#b7">[8]</ref>, enabling users to compose personalisation rules (for instance, having the robot say something when a person is in front of it or perform specific actions based on vocal commands).</p><p>Recent advancements in artificial intelligence, particularly with Large Language Models (LLMs), have the potential to enhance robots' communicative and operational capabilities. This enables interactions that can resemble human-like conversations and dynamically adapt to end-user requests <ref type="bibr" target="#b8">[9]</ref>. Within this context, trigger-action programming emerges as an effective approach to End-User Development (EUD) in robotic systems. 
It allows users to define robot behaviours in response to specific events or conditions, offering a user-friendly way to customise robot functionalities without the need for deep technical knowledge.</p><p>This paper presents a prototype of a conversational agent embodied in a Pepper robot that utilises an LLM to assist non-expert users in creating personalisation rules in trigger-action format. Through vocal conversations, users can naturally express their preferences for how they want the robot to act when a specific event occurs. These events can be triggered by interacting with the robot itself (e.g., when the robot recognises a person), or by the surrounding smart environment (e.g., the change of temperature in a room, opening or closing windows/doors). In the following sections, we first introduce related work in the field of trigger-action programming for robot end-user development, and then we delve into the architecture of this prototype and an application scenario. Finally, we discuss the next step for this work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Various research studies have explored the possibilities of end-user programming of robot behaviour using the trigger-action paradigm, employing diverse interaction modalities and approaches. Leonardi et al. <ref type="bibr" target="#b9">[10]</ref> exploited a graphical web-based wizard interface to enable users without programming skills to define personalisation rules by specifying events and/or conditions (triggers) that, once met, initiate the execution of defined actions. The tool allows the user to select triggers and actions related to smart devices (e.g., the motion detected by a sensor, turning on smart bulbs) and a Pepper robot (e.g., a touch on the robot's head). Thus, it was possible to create automations, such as having Pepper say "Hey, how are you?" when someone entering the room is detected, by combining Internet of Things devices with Pepper. In the present study, we propose the development of automations through direct interaction with the robotic system, as opposed to the utilisation of a separate web-based wizard.</p><p>Another contribution by Porfirio et al. <ref type="bibr" target="#b10">[11]</ref> presents Tabula, a multimodal end-user development system for programming service robots for personal use in domestic and workplace environments. In this case, the system enables users without programming skills to script tasks, defining humanoid robots' behaviour (a Pepper one) using trigger-action programming and combining natural language with sketches on a visual interface to define the automation. In particular, users can utilise natural language commands (via voice) to define triggers and actions. The resulting automation is visualised on a two-dimensional map, displaying the current environment (e.g., the user's house) and the defined path or actions of the robot. 
This setup allows users to modify or refine the automation or to implement more complex logic that is difficult to express verbally. The implemented prototype encompasses a set of five actions (e.g., moving to a position, saying something) and two events (e.g., a person approaching or speaking to the robot), enabling the creation of automations like "when the user arrives home, the robot goes to the entrance". Although users considered the approach promising, the system faces challenges in processing natural language input due to difficulties in understanding complex or ambiguous commands, which leads to errors in automation and user frustration.</p><p>Finally, a recent work by Karli et al. <ref type="bibr" target="#b11">[12]</ref> developed a system integrating ChatGPT to enable end-users to define robot programs (e.g., for defining the movement of a robotic arm) using natural language instructions. The system interface presents a chat from which the user sends the inputs, a console showing the generated code and a view of the robot simulator. Beginning with the description of the desired robot behaviour, the user engages in a collaborative process that iteratively defines and debugs the specifications with the system to address the request. While the natural language interaction is effective, the study emphasises several critical points regarding the use of LLMs in this context. It highlights the necessity of enhancing the reliability of LLM-generated code through accurate code verification processes, crafting more effective prompts and adjusting prompts dynamically to better fit the context.</p><p>In general, LLM approaches open new possibilities for applications across a broad range of settings in end-user development, highlighting the potential to enhance the usability and flexibility of robot programming.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Our Approach</head><p>In this proposal, we introduce the combined use of Pepper with an LLM agent, aiming to define the robot's behaviour through specific trigger-action personalisation rules (also called automations) expressed by voice. By integrating an LLM as a natural language processing module, users can communicate with Pepper in a more intuitive and conversational way. This enhancement enables Pepper to process complex commands and questions, significantly improving its usability and interactive capabilities. Furthermore, this system design lets users create automations verbally, eliminating the need for any programming skills. Figure <ref type="figure" target="#fig_0">1</ref> illustrates the architecture of the designed system.</p><p>Specifically, this prototype aims to create automations that include triggers and actions related to both a smart environment (e.g., a smart home) and the robot. Through these automations, it is possible to define events, conditions, and actions related to both sensors and smart objects (e.g., motion sensors, smart light bulbs, smart thermostats) and Pepper (e.g., recognise a person, display something on its tablet, say something). In this way, the robot becomes part of a smart ecosystem in which it can perform actions in response to triggers related to the environment or be itself the trigger of events for the execution of actions by smart objects. On a technical level, Pepper can be considered a system entity at the same level as sensors and smart objects and can thus be integrated into control systems of smart environments. The created automations are then executed by the Automation Manager module.</p><p>When the user speaks, the robot utilises the Google Speech API to identify the spoken sentence. The system then makes an API call to the Dialogue Manager to process the user message. The API response, which is delivered using Pepper's voice, contains the response message for the user. 
Once the automation is complete, it is saved in a database and executed through the Automation Manager.</p><p>More generally, our approach opens the possibility of creating automations that involve robots and surrounding connected sensors and objects. Indeed, the created automations can involve both triggers and actions associated with the robot (e.g., when Pepper recognises a specific person, it says "Hello [name]" and performs a greeting animation), but it is also possible to have triggers activated by external sensors or objects and actions executed by the robot. Vice versa, triggers can be generated by the robot, with actions executed by surrounding objects (e.g., when Pepper detects a negative emotion on the user's face, soft lights turn on and relaxing music plays).</p></div>
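The trigger-action structure described above can be sketched as a small data model. The class and function names below (`Trigger`, `Action`, `Automation`, `fire`) are illustrative assumptions, not the prototype's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a trigger-action rule: triggers and actions can each
# refer either to the robot or to a smart object in the environment.

@dataclass(frozen=True)
class Trigger:
    source: str   # "robot" or a smart device, e.g. "motion_sensor"
    event: str    # e.g. "person_recognised", "temperature_above"

@dataclass(frozen=True)
class Action:
    target: str   # "robot" or a smart object, e.g. "smart_bulb"
    command: str  # e.g. "say", "turn_on"
    argument: str = ""

@dataclass
class Automation:
    trigger: Trigger
    actions: list = field(default_factory=list)

def fire(automations, source, event):
    """Return the actions of every automation whose trigger matches the event."""
    return [a for auto in automations if auto.trigger == Trigger(source, event)
            for a in auto.actions]

# Example from the text: when Pepper recognises a person, the robot greets
# and a smart bulb turns on, i.e. robot trigger + mixed robot/IoT actions.
greet = Automation(
    Trigger("robot", "person_recognised"),
    [Action("robot", "say", "Hello!"), Action("smart_bulb", "turn_on", "soft")],
)
```

Under this sketch, the robot and the IoT devices are interchangeable as trigger sources and action targets, which is the symmetry the section describes.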
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Interacting with Pepper</head><p>The application integrates voice and text messaging functionalities, as well as control over how messages are displayed. The vocal interaction approach utilises Google Cloud's Text-to-Speech (TTS) and Speech-to-Text (STT) APIs. This approach offers a comprehensive solution for adding vocal input and output capabilities, thereby enhancing the user experience by making interactions more natural and engaging.</p><p>Users can activate voice recognition in the robot by pressing a designated button on the interface or touching a part of the robot's body, such as the hand or head. This service converts vocal input into text. Initially, the service starts recording audio via the robot's microphone, visually notifying the user of the recording status with a change in button colour while recording is active. Next, the service sends the audio stream to the Google STT service. The speech service promptly notifies the robot application when receiving the input converted into text format, allowing for almost instantaneous processing of the transcribed message. The obtained text can be used for various purposes within the robot application, such as displaying it for the user, sending it to other backend services for further responses or actions, or triggering specific commands based on keywords.</p><p>The robot application's audio output is generated by a TTS functionality that handles authentication to the Google Cloud service, manages synthesis requests, and ultimately plays the resulting audio using a media player. The system saves the synthesised audio to a local file, readying it for playback. This conversion occurs after the message has been processed and sent to the list of displayed messages, ensuring that the user can both see and hear the bot's response. 
A useful feature introduced is the toggle functionality that allows users to choose between viewing all messages exchanged in the chat and viewing only the last two messages (the application's default view). The toggle is triggered by a method that flips a Boolean value, based on which the adapter decides which view to adopt. This mechanism helps keep the user interface tidy, showing only a part of the messages during vocal interaction but allowing users to view the entire conversation, if desired. This decision was made under the assumption that during a real-time vocal conversation, users do not always need a textual representation of the entire chat.</p><p>Finally, the interface also features a reset function, allowing users to activate a series of actions through a dedicated button to stop any ongoing activity, such as vocal recording or the playback of animations and vocal synthesis. Moreover, this functionality serves to clear the user interface by deleting the message history and returning any input fields or selections to their original state. This reset feature ensures a smooth and intuitive user experience by allowing users to easily reset the application without the need to navigate through complex menus or restart the app entirely.</p></div>
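The toggle and reset behaviour described in this section amounts to a few lines of state management. `ChatView` and its methods are hypothetical names used for illustration, not the application's real classes:

```python
# Minimal sketch of the chat-view toggle and reset logic described above;
# the adapter picks a view based on a single Boolean, and reset clears state.

class ChatView:
    def __init__(self):
        self.messages = []
        self.show_all = False  # default view: only the last two messages

    def add(self, sender, text):
        self.messages.append((sender, text))

    def toggle(self):
        # Flip the Boolean the adapter consults to choose the view.
        self.show_all = not self.show_all

    def visible(self):
        return self.messages if self.show_all else self.messages[-2:]

    def reset(self):
        # Clear the history and restore defaults, as the reset button does.
        self.messages.clear()
        self.show_all = False
```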
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Dialogue Manager</head><p>The Dialogue Manager serves as the core component for processing natural language inputs and managing conversational flow. When a user engages with the robot, Pepper transcribes the user's speech into text and then sends it to the Dialogue Manager via an HTTP request. Upon receiving the text, the Dialogue Manager (a Flask Python server) forwards the message to the GPT-4 model through the OpenAI API. Guided by the instructions in the defined prompt, the model determines the next step in the conversation. It can either execute a function to perform a specific task or directly generate a textual response to the user's query.</p><p>The constructed prompt begins with a description of the role the model is expected to adopt (e.g., "You are Pepper, a humanoid robot... "), followed by an explanation of the task (e.g., "Your task is to help users create automations... ") and some general guidelines to be adhered to when interacting with the user (e.g., "call the user by name" or "keep the response short and simple…" ). Then, the prompt introduces the functions the model can use, along with instructions on when and how to use them (e.g., "always use the verify_automation function before saving the automation" ). The function calling functionality, provided by the OpenAI API, enables the LLM to interface with external resources and tools. This is possible by supplying the model with a set of function descriptions and the required parameters for their execution. In particular, we include a function for retrieving the list of possible triggers and actions for defining an automation, a function to verify the correctness of the defined automation, and a function for saving the created automation in a database. When a function is invoked, its output is fed back into the model. This becomes the basis for GPT-4 to generate an appropriate response. 
For example, the "save automation" function provides an output message containing the unique automation ID, as well as a confirmation message if the operation was successful or an error message otherwise. The model utilises this output to generate the response for the user. This response is then dispatched to Pepper, closing the loop of the initial HTTP request. Pepper then verbalises the response, providing the user with an audible answer.</p><p>Example Scenario. Let's consider a usage scenario in which the user talks with Pepper and defines an automation by saying something like: "When I come back home, if I'm in a bad mood, say something comforting". Consequently, Pepper will send this sentence to the Dialogue Manager, which retrieves the list of possible triggers and actions and proposes an initial automation to the user: "We can detect when you are home using the location of your smartphone, and I can detect your mood by your facial expressions. If you are sad or angry, I can put on relaxing music and say something comforting like... ".</p><p>At this point, the user can continue talking with Pepper to refine the proposed automation. Once the user is satisfied with the defined triggers and actions, the Dialogue Manager uses the "verify automation" function to check that the automation contains only available triggers and actions. If the automation is correctly verified, Pepper asks the user whether the created automation can be saved. After saving, a confirmation message summarises the created automation along with the ID assigned in the database. Once the automation is saved, the Automation Manager module executes the defined actions when the chosen event is triggered and any defined conditions are met.</p></div>
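The function-calling loop just described can be sketched as follows. A scripted stub stands in for GPT-4, and plain Python functions stand in for the Flask/OpenAI plumbing; apart from the function roles the paper mentions (listing triggers and actions, verifying, and saving automations), all names and data shapes here are illustrative assumptions:

```python
import json

# Hypothetical catalogue of triggers/actions and an in-memory "database".
AVAILABLE = {"triggers": ["user_arrives_home", "negative_mood_detected"],
             "actions": ["say", "play_music"]}
DB = {}

def get_triggers_and_actions():
    # Stands in for the function that lists what the user may combine.
    return AVAILABLE

def verify_automation(automation):
    # Check that the automation uses only available triggers and actions.
    ok = (automation["trigger"] in AVAILABLE["triggers"]
          and all(a in AVAILABLE["actions"] for a in automation["actions"]))
    return {"valid": ok}

def save_automation(automation):
    # Persist the automation and return its assigned ID, as in the paper.
    auto_id = len(DB) + 1
    DB[auto_id] = automation
    return {"id": auto_id, "status": "saved"}

TOOLS = {"get_triggers_and_actions": get_triggers_and_actions,
         "verify_automation": verify_automation,
         "save_automation": save_automation}

def handle_message(model, text):
    """Loop: let the model call tools, feed each output back, until it replies."""
    messages = [{"role": "user", "content": text}]
    while True:
        step = model(messages)
        if step["type"] == "tool_call":
            result = TOOLS[step["name"]](*step.get("args", []))
            messages.append({"role": "tool", "content": json.dumps(result)})
        else:
            return step["content"]

# Scripted stand-in for the LLM: verify first, then save, then confirm the ID.
automation = {"trigger": "user_arrives_home", "actions": ["say", "play_music"]}
script = [{"type": "tool_call", "name": "verify_automation", "args": [automation]},
          {"type": "tool_call", "name": "save_automation", "args": [automation]}]

def scripted_model(messages):
    if script:
        return script.pop(0)
    saved = json.loads(messages[-1]["content"])  # output of save_automation
    return {"type": "reply", "content": f"Saved automation {saved['id']}."}

reply = handle_message(scripted_model, "When I come back home, say something comforting")
```

The loop mirrors the paper's flow: each tool output becomes model context, and the final textual reply is what would be sent back to Pepper for speech synthesis.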
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions and Future Work</head><p>This research explores the integration of Large Language Models into the domain of End-User Development for social robots, focusing on enabling users to customise robot behaviours through intuitive, natural language vocal interactions. By implementing a prototype conversational agent embodied in a Pepper robot, we enable non-expert users to create automations for personalising the robot's behaviour based on events in a smart environment (e.g., presence detection in a room) or on events on the robot itself (e.g., human face recognition). This approach leverages trigger-action programming, presenting a user-friendly method for customising robot functionalities without requiring technical knowledge. Our proposal contributes to the field by illustrating the practical application of LLMs in enhancing robot usability and flexibility, suggesting a promising avenue for future research and development in social robotics and user-centric automation. For future work, we plan to initially conduct user tests in a controlled environment (e.g., a laboratory setting) to evaluate the strengths and weaknesses of our solution in comparison with existing tools based on visual interfaces. User tests in real-world scenarios will follow, addressing the need for realistic, extended evaluations in robotics. Given the focus on the End-User Development approach, it is important to test with users without programming skills and home automation experience.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: System Architecture</figDesc><graphic coords="2,309.59,65.61,213.67,312.08" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://www.home-assistant.io/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The Effect of a Robot&apos;s Gestures and Adaptive Tutoring on Children&apos;s Acquisition of Second Language Vocabularies</title>
		<author>
			<persName><forename type="first">J</forename><surname>De Wit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schodde</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Willemsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Bergmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Haas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kopp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Krahmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vogt</surname></persName>
		</author>
		<idno type="DOI">10.1145/3171221.3171277</idno>
		<ptr target="https://doi.org/10.1145/3171221.3171277" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI &apos;18</title>
				<meeting>the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI &apos;18<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="50" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The impact of serious games with humanoid robots on mild cognitive impairment older adults</title>
		<author>
			<persName><forename type="first">M</forename><surname>Manca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Paternò</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Santoro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Zedda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Braschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Franco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sale</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ijhcs.2020.102509</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/S1071581920301117" />
	</analytic>
	<monogr>
		<title level="j">International Journal of Human-Computer Studies</title>
		<imprint>
			<biblScope unit="volume">145</biblScope>
			<biblScope unit="page">102509</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">An Integrated Approach to Human-Robot-Smart Environment Interaction Interface for Ambient Assisted Living</title>
		<author>
			<persName><forename type="first">H.-D</forename><surname>Bui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">Y</forename><surname>Chong</surname></persName>
		</author>
		<idno type="DOI">10.1109/ARSO.2018.8625821</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page">2162</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A Framework for Service Robots in Smart Home: An Efficient Solution for Domestic Healthcare</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ramoly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bouzeghoub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Finance</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.irbm.2018.10.010</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/S1959031818302793" />
	</analytic>
	<monogr>
		<title level="j">IRBM</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="413" to="420" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Socially Assistive Robots in Smart Homes: Design Factors that Influence the User Perception</title>
		<author>
			<persName><forename type="first">E</forename><surname>Toscano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Spitale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Garzotto</surname></persName>
		</author>
		<idno type="DOI">10.1109/HRI53351.2022.9889467</idno>
		<ptr target="https://ieeexplore.ieee.org/document/9889467" />
	</analytic>
	<monogr>
		<title level="m">17th ACM/IEEE International Conference on Human-Robot Interaction (HRI)</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1075" to="1079" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Robot-enabled support of daily activities in smart home environments</title>
		<author>
			<persName><forename type="first">G</forename><surname>Wilson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pereyda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Raghunath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>De La Cruz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nesaei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Minor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schmitter-Edgecombe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Cook</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.cogsys.2018.10.032</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/S1389041718302651" />
	</analytic>
	<monogr>
		<title level="j">Cognitive Systems Research</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="258" to="272" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Towards a Modular and Distributed End-User Development Framework for Human-Robot Interaction</title>
		<ptr target="https://ieeexplore.ieee.org/document/9323043" />
	</analytic>
	<monogr>
		<title level="j">IEEE Journals &amp; Magazine | IEEE Xplore</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">CAPIRCI: A Multi-modal System for Collaborative Robot Programming</title>
		<author>
			<persName><forename type="first">S</forename><surname>Beschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Fogli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Tampalini</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-24781-2_4</idno>
		<ptr target="http://link.springer.com/10.1007/978-3-030-24781-2_4" />
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="volume">11553</biblScope>
			<biblScope unit="page" from="51" to="66" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
	<note>Lecture Notes in Computer Science</note>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">ChatGPT for Robotics: Design Principles and Model Abilities</title>
		<author>
			<persName><forename type="first">S</forename><surname>Vemprala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bonatti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bucker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kapoor</surname></persName>
		</author>
		<idno type="DOI">10.48550/arXiv.2306.17582</idno>
		<idno type="arXiv">arXiv:2306.17582</idno>
		<ptr target="http://arxiv.org/abs/2306.17582" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Trigger-Action Programming for Personalising Humanoid Robot Behaviour</title>
		<author>
			<persName><forename type="first">N</forename><surname>Leonardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Manca</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Paternò</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Santoro</surname></persName>
		</author>
		<idno type="DOI">10.1145/3290605.3300675</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3290605.3300675" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI &apos;19</title>
				<meeting>the 2019 CHI Conference on Human Factors in Computing Systems, CHI &apos;19<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="13" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Sketching Robot Programs On the Fly</title>
		<author>
			<persName><forename type="first">D</forename><surname>Porfirio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Stegner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cakmak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sauppé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Albarghouthi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mutlu</surname></persName>
		</author>
		<idno type="DOI">10.1145/3568162.3576991</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3568162.3576991" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, HRI &apos;23</title>
				<meeting>the 2023 ACM/IEEE International Conference on Human-Robot Interaction, HRI &apos;23<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="584" to="593" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Alchemist: LLM-Aided End-User Development of Robot Applications</title>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">B</forename><surname>Karli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-T</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">N</forename><surname>Antony</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-M</forename><surname>Huang</surname></persName>
		</author>
		<idno type="DOI">10.1145/3610977.3634969</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3610977.3634969" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, HRI &apos;24</title>
				<meeting>the 2024 ACM/IEEE International Conference on Human-Robot Interaction, HRI &apos;24<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="361" to="370" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
