<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
      <journal-meta>
        <journal-title-group>
          <journal-title>CEUR Workshop Proceedings</journal-title>
        </journal-title-group>
        <issn>1613-0073</issn>
      </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Human-Robot Interaction through End-User Development and Large Language Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Luigi Gargioni</string-name>
          <email>luigi.gargioni@unibs.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Brescia - Department of Information Engineering</institution>
          ,
          <addr-line>Via Branze 38, Brescia, 25123</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <kwd-group>
        <kwd>Human-Robot Interaction</kwd>
        <kwd>Human-Computer Interaction</kwd>
        <kwd>End-User Development</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>6</fpage>
      <lpage>10</lpage>
      <abstract>
        <p>This paper presents a PhD research project aimed at enhancing Human-Robot Interaction (HRI) by empowering non-technical users in both the definition and execution of robot tasks. The project addresses two main objectives: (i) enabling end users, such as domain experts without programming skills, to define, validate, and modify robot tasks through End-User Development (EUD) enhanced by Large Language Models (LLMs); and (ii) improving robot adaptability during collaborative task execution by integrating human cognitive factors (e.g., beliefs, desires, and intentions) into robot decision-making. The findings aim to contribute to the ongoing shift toward more adaptable and human-aware robotic systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The field of robotics is currently undergoing rapid expansion, driven by significant technological advancements
across multiple disciplines. In industrial automation, innovations in control systems, materials, and
technical standards are transforming manufacturing processes. Robots in this domain are capable
of executing highly precise and repeatable operations at high speed over long periods, making them
ideal for tasks such as precision welding, heavy load handling, and complex assembly line operations.
In parallel, the rise of collaborative robots (cobots) is reshaping the way humans and robots interact.
Designed to operate safely alongside human workers without the need for physical barriers, cobots are
enabling more flexible, adaptive, sustainable, and human-centric manufacturing environments [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Furthermore, recent advances in Artificial Intelligence (AI) have introduced a new wave of
computational capabilities to mobile robotics [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], including machine learning, automated planning, and computer
vision. These developments have enabled mobile robots to autonomously navigate both known and
unknown environments. Applications include hazardous area exploration, such as radioactive zones,
and object recognition in sensitive contexts like airports.
      </p>
      <p>
        Another area undergoing significant transformation is social robotics [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This is largely driven by
the advent of Large Language Models (LLMs), which have acquired conversational abilities far beyond
traditional systems. Social robots are now capable of engaging in human-like interactions aimed at
acting as a conversational partner, supporting mental well-being, and facilitating cognitive exercises.
      </p>
      <p>While the robotics field has shown impressive progress and earned substantial attention, challenges
persist, particularly in the domains of collaborative and social robotics, where interaction with humans
remains a central aspect. Human-Robot Interaction (HRI) occurs at multiple levels: from task definition
to real-time task execution and adaptation. These interactions often require a robot to understand and
respond to human actions, preferences, and emotional states, for which traditional model-based robotic
systems are not well suited.</p>
      <p>In collaborative robotics, one key challenge lies in enabling domain experts to define the tasks that
robots must perform. Given the increasing versatility of collaborative robots, such systems must be
adaptable to frequent changes in production workflows. This demand aligns with the principles of
End-User Development (EUD) [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ], which promotes empowering non-expert users to modify or
extend technological artefacts. In this context, end-user robot programming has emerged as a growing
field of research, aiming to make robotic systems programmable by users without formal expertise in
robotics. However, as highlighted in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], programming robots involves additional complexity compared
to traditional software, due to the need to reference physical objects and locations, ensure safe operation,
and coordinate physical movement in shared environments with humans. Two main approaches have
been identified in end-user robot programming: (i) programming by demonstration [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which requires
multiple iterations for skill generalization, and (ii) visual programming interfaces [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], designed to lower
the cognitive and technical barriers for users. In parallel, the ubiquity of natural language in human
communication has inspired efforts to enable natural language programming for robots. This paradigm
has been applied to tasks such as navigation, assembly, and manipulation in industrial contexts [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and
is now being explored for social and service robotics [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        Although HRI shares many objectives with Human-Computer Interaction (HCI), some of the main
challenges in building a human-aware collaboration with robots come from the field of autonomous
robotics. These challenges often involve how robots make decisions and interpret their surroundings,
areas traditionally explored through research on autonomous behavior and artificial cognition [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
Several studies have addressed the architectural and cognitive challenges involved in designing adaptive
and interactive robotic systems [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. These systems typically integrate decisional components
responsible for high-level reasoning, such as planning and situation assessment, with functional components
that manage real-time perception and action [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Despite this progress, current architectures often
depend on predefined models of human mental states, like beliefs, desires, and intentions, limiting their
ability to adapt to unpredictable behavior. This model-based limitation introduces uncertainty and
hinders the robot’s capacity to anticipate or respond to nuanced human actions. Building accurate,
generalizable models of human cognition remains an open challenge.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Research Objectives and Contributions</title>
      <p>Building on the themes discussed above, this PhD project is structured around two main research
objectives. Objective 1 was explored in collaboration with Prof. Daniela Fogli and Prof. Pietro Baroni at
the University of Brescia. Objective 2 was developed during a research visit at CNRS-LAAS in
Toulouse (France), under the supervision of Dr. Rachid Alami, and with continued support from Prof.
Daniela Fogli.</p>
      <sec id="sec-2-1">
        <title>2.1. Objective 1: End-User Development for Robot Task Definition</title>
        <p>
          Starting from the study reported in [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], this objective aimed to explore and support the ability of
end users to autonomously define, edit, and verify robotic tasks. This encompassed not only the
specification of task logic but also the identification and modeling of relevant domain items of the
real-world environment. This objective is based on the idea that giving end users more control can make
robotic systems easier to use and more adaptable to different and dynamic situations. Specifically, the
goal is to investigate design approaches and interaction paradigms that facilitate such empowerment,
including natural language interfaces and multimodal interaction techniques. Key aspects of interest
include:
1. Supporting the definition of both tasks and domain elements directly by end users, reducing
dependency on technical developers.
2. Exploring the viability and limitations of natural language as an approach for task specification.
3. Evaluating the effectiveness of multimodal interaction in enhancing the end-user experience
across the different stages of the task definition workflow.
        </p>
        <sec id="sec-2-1-1">
          <title>2.1.1. Methodology</title>
          <p>To guide this investigation, the following research questions were formulated:
• RQ1: To what extent can domain experts define domain-specific tasks and items through dedicated
user interfaces designed for non-programmers?
• RQ2: What are the capabilities and limitations of natural language interfaces in supporting
users to accurately specify, validate, and refine structured robotic tasks, and how can multimodal
interaction enhance this process during task definition?</p>
          <p>To address the research questions, several interaction design activities were carried out. Whenever
feasible, the investigations focused on use cases in the healthcare domain, particularly on galenic
formulations, which are personalized medications manually prepared by pharmacists to meet specific
patient needs, such as allergies, tailored dosages, or specific administration forms.</p>
          <p>
            The healthcare domain was selected as a reference context since this PhD project is part of the
Technology for Health doctoral program at the University of Brescia. The interaction design activities
revolved around two main projects:
• Project 1 - Preparation of Personalized Medicines through Collaborative Robots: In [
            <xref ref-type="bibr" rid="ref15">15</xref>
            ], a
survey was conducted to investigate how medications are organized and dispensed in pharmacies
and patients’ homes, with a specific focus on the use of pill dispensers. This initial analysis
offered a contextual view of pharmacy workflows and provided the opportunity to explore
the feasibility of a future end-to-end service for galenic preparation and delivery. Subsequently,
the next step of this activity (related paper under review) focuses on the application of a
human-centred methodology to design an EUD environment by involving representative end
users (experts in the pharmaceutical sector) from system ideation to its evaluation. The study
identified repetitive and low-value tasks that could be effectively delegated to collaborative robots,
allowing pharmacists to focus on higher-priority activities. The primary conclusions of the
work pertain to the design implications associated with AI-enabled EUD for collaborative robots.
Based on these findings, in [
            <xref ref-type="bibr" rid="ref16">16</xref>
            ], low-fidelity prototypes of an interactive system were developed.
Following the design implications that emerged and the resulting mockups, a web-based prototype was
developed [
            <xref ref-type="bibr" rid="ref17 ref18">17, 18</xref>
            ]. This application is called PRAISE (Pharmaceutical Robotic and AI System for
End users). The purpose of the application is to provide support to end users (i.e., pharmacists)
in defining robot programs that are suitable for the specific case of galenic preparations. The
application is conceived as an EUD environment, which implements a hybrid interaction approach
based on a natural language interface leveraging LLMs and a domain-oriented graphical interface
to check and revise the robot programs created. A user study conducted with nine pharmacists
demonstrated the validity of the approach and yielded positive feedback.
• Project 2 - Natural Language and Multimodal Interfaces for End-User Robot
Programming: The work reported in [
            <xref ref-type="bibr" rid="ref19">19</xref>
] explores the potential of OpenAI ChatGPT in enhancing natural
language understanding. This work builds on the starting point of the PhD project,
namely the CAPIRCI application reported in [
            <xref ref-type="bibr" rid="ref14">14</xref>
            ]. CAPIRCI is designed to support non-technical
users in defining robot tasks. The work introduced the possibility for users to verify and modify
natural language-generated programs using a block-based interface, with the aim of enhancing
transparency and control. The goal of this work was to begin an exploration of the capabilities of
the new technologies based on LLMs in identifying user intents and structuring the desired output.
Extending the previous work, [
            <xref ref-type="bibr" rid="ref20">20</xref>
            ] presented a prototype environment that allows users with no
technical background to define domain-specific elements (objects, actions, locations) and create
pick-and-place tasks using a combination of natural language and graphical programming via
Blockly. The goal was to assess the feasibility of multimodal interaction in defining and revising
robot tasks. As a continuation of the previous work, a new prototype (related paper under review)
integrated a digital twin into the application. Once a task is defined, it can be simulated virtually,
allowing users to preview the robot’s behavior and identify potential execution issues before
deployment.
          </p>
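          <p>Neither prototype's internal prompt or program format is reproduced here, but the hybrid interaction loop described above can be illustrated with a minimal, hypothetical sketch: the LLM is asked to emit a task as structured JSON, and a deterministic validator checks it against the domain vocabulary before the program is shown in the graphical interface for revision. All action names and the schema are illustrative assumptions, not the PRAISE implementation.</p>
          <preformat>
```python
import json

# Hypothetical vocabulary of robot actions the EUD environment supports.
ALLOWED_ACTIONS = {"pick", "place", "move"}

def validate_program(program: dict) -> list[str]:
    """Return validation errors; an empty list means the program can be
    rendered in the graphical interface for the pharmacist to review."""
    errors = []
    for i, step in enumerate(program.get("steps", [])):
        if step.get("action") not in ALLOWED_ACTIONS:
            errors.append(f"step {i}: unknown action {step.get('action')!r}")
        if "target" not in step:
            errors.append(f"step {i}: missing target object or location")
    return errors

# Example of the kind of JSON an LLM might return for a request such as
# "put the mortar on the scale, then pour in the powder":
llm_output = json.loads("""
{"steps": [
  {"action": "pick",  "target": "mortar"},
  {"action": "place", "target": "scale"},
  {"action": "pick",  "target": "powder_jar"},
  {"action": "pour",  "target": "mortar"}
]}
""")

print(validate_program(llm_output))  # the unsupported "pour" step is flagged
```
          </preformat>
          <p>Rather than executing the LLM output directly, a rejected step would be surfaced to the user for correction, which is the transparency-and-control role the graphical interface plays in the prototypes.</p>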
        </sec>
        <sec id="sec-2-1-2">
          <title>2.1.2. Main Findings</title>
          <p>These projects advance the field of end-user robot programming by introducing a human-centered
AI approach that integrates LLMs, multimodal interaction, and digital twins. The resulting workflow
empowers domain experts to conceptualize, validate, and refine robot tasks through intuitive, accessible
tools, without requiring programming or robotics expertise.</p>
          <p>Central to the methodology is the active involvement of end users throughout the design process,
ensuring that the environment aligns with their practices, expectations, and cognitive models.</p>
          <p>The activities carried out aim to demonstrate that, when properly supported, non-technical users can
effectively engage in robot task definition, ultimately enabling more adaptive, usable, and trustworthy
robotic systems in real-world environments.</p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Objective 2: Human-Robot Collaboration for Task Accomplishment</title>
        <p>The second objective of the PhD project was to investigate how a robot can efectively collaborate with
a human partner by taking into account human beliefs, intentions, and emotional states during task
execution. This goal addresses the limitations of rigid model-based approaches, which often fall short
when facing the complexity and unpredictability of real-world human behavior.</p>
        <p>Key challenges addressed in this objective include:
1. Overcoming the limitations of traditional model-based approaches to robotics when interacting
with humans, whose behavior is often non-deterministic and emotionally driven.
2. Maintaining a robust and rigorous task planning framework, even in dynamic and uncertain
environments.</p>
        <sec id="sec-2-2-1">
          <title>2.2.1. Methodology</title>
          <p>To guide this investigation, the following research questions were formulated:
• RQ3: How can robots adapt to the unpredictable behavior of human partners in collaborative
scenarios, while maintaining task control and ensuring safety during execution?</p>
          <p>
            To address RQ3, a hybrid system architecture was designed that integrates deterministic techniques
(e.g., task planning and rule-based approaches) with the flexibility of LLMs. The preliminary
conceptualization was introduced in [
            <xref ref-type="bibr" rid="ref21">21</xref>
            ], and a complete description of the architecture, along with initial
evaluations, is provided in a paper currently under review.
          </p>
          <p>The approach is based on the concept of shared goals between the human and the robot. A task is
defined as a cooperative activity aimed at achieving a mutually agreed-upon goal.</p>
          <p>The assumed application context is a coaching scenario where the robot assists a person in completing
a task composed of several activities. As an illustration, consider a patient in a
care facility who is required to take medications and to engage in both cognitive and physical activities.
Effective collaboration requires adapting the robot’s actions not only to the requirements of the task
but also to the human’s evolving behavior, preferences, and affective state.</p>
          <p>The proposed architecture is modular and designed to ensure seamless collaboration, adaptive
behavior, and safe execution across the following components:
• Natural Language Understanding and Knowledge Acquisition: Interaction begins with natural
language input from the user, which is transformed into semantic vector embeddings and stored
in a dedicated Vector Database.
• Human-Robot Task Synthesizer: This module, based on an LLM, processes user input and
contextual data to generate a structured task representation, capturing goals, constraints, and required
actions.
• Situation Assessment &amp; Human-Robot Task Progress: By integrating prior knowledge, task history,
and current context, this module dynamically updates the task plan in response to environmental
or human-driven changes. This module is also based on an LLM and ensures continual alignment
between system behavior and human expectations.
• Event Planner &amp; Next Steps: Responsible for monitoring task progression and deciding
subsequent actions, this component checks for discrepancies between expected and actual outcomes,
triggering feedback loops when necessary.
• Robot Perception &amp; Robot Effector: These modules ensure the system’s responsiveness and safety.</p>
          <p>The perception component continuously interprets both the task state and the human’s behavior,
leveraging a Visual Language Model (VLM), while the effector module governs physical actions
and verbal communication, maintaining alignment with user preferences and situational needs.</p>
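          <p>As a rough illustration of how deterministic control can wrap LLM flexibility, the following sketch stubs the LLM-based planner with a simple function and gates each proposal through rule-based checks before the effector acts. Component and action names are invented for the example and are not the architecture's actual API.</p>
          <preformat>
```python
# Deterministic guardrails around an LLM proposer (stubbed here).
SAFE_ACTIONS = {"remind_medication", "start_exercise", "idle"}

def llm_propose_next(task_state: dict) -> str:
    # Stand-in for the LLM-based situation assessment: propose the first
    # planned activity not yet completed.
    for activity in task_state["plan"]:
        if activity not in task_state["done"]:
            return activity
    return "idle"

def deterministic_gate(action: str, task_state: dict) -> str:
    # Rule-based layer: reject unknown actions and enforce ordering
    # constraints, falling back to a safe default.
    if action not in SAFE_ACTIONS:
        return "idle"
    if action == "start_exercise" and "remind_medication" not in task_state["done"]:
        return "remind_medication"  # medication reminder must come first
    return action

state = {"plan": ["remind_medication", "start_exercise"], "done": set()}
executed = []
for _ in range(3):
    action = deterministic_gate(llm_propose_next(state), state)
    executed.append(action)
    state["done"].add(action)

print(executed)  # → ['remind_medication', 'start_exercise', 'idle']
```
          </preformat>
          <p>In the real architecture the proposer role is played by the LLM-based Situation Assessment and Event Planner modules, while the gate corresponds to the deterministic planning and rule-based techniques that keep execution safe under unpredictable human behavior.</p>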
        </sec>
        <sec id="sec-2-2-2">
          <title>2.2.2. Main Findings</title>
          <p>The developed architecture contributes to the advancement of HRI by blending deterministic techniques
with the contextual flexibility offered by LLMs. The system is capable of adapting robot behavior in
real time, maintaining alignment with human intentions and adjusting to unexpected changes in the
environment or in user input.</p>
          <p>Empirical case studies confirm the system’s capacity to interpret human-provided goals in natural
language, generate coherent and context-aware task plans, dynamically adapt execution strategies in
response to ongoing interaction, and ensure safety and reliability despite behavioral uncertainty on the
human side.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Discussion and Future Works</title>
      <p>While the work presented in this paper introduces promising approaches to empowering end users
in robot programming and enhancing human-robot collaboration, some limitations remain that carry
implications for future development.</p>
      <p>A first limitation relates to the realism of the deployed environments. While the digital twin
component introduces a useful simulation layer, it currently simplifies many aspects of the real-world
manipulation tasks. Moving forward, a more realistic digital twin, capable of simulating diverse objects
and conditions (e.g., varied object types, workspace constraints, and safety considerations), is needed to
better reflect the complexity of actual work settings. Similarly, future user testing should incorporate
real objects, such as test tubes and pharmaceutical tools, to more closely replicate the environment in
which pharmacists operate and to validate the robustness of the task definition interface under practical
constraints.</p>
      <p>In the case of the hybrid architecture for collaborative task execution, current experiments rely
on partially abstracted or scripted interaction scenarios. A natural next step is to bring this system
into more defined and dynamic use cases (e.g., many activities with different priorities and schedule
constraints, also considering human preferences), where human behavior is less predictable and the
system’s adaptability can be more rigorously challenged.</p>
      <p>Finally, while recent advances in LLMs ofer new opportunities for natural and flexible interaction,
they must be approached with critical awareness. In the projects presented in this paper, LLMs are never
used as-is; instead, they are systematically complemented by additional technologies and methodologies
designed to interpret, verify, and ground user input. This hybrid strategy is essential to mitigate the
known limitations of LLMs, such as inconsistencies or safety concerns, and to ensure that human-robot
collaboration remains robust, reliable, and context-aware.</p>
      <p>Overall, this work contributes to enabling more human-aware and intelligent robotic systems. By
demonstrating the feasibility of key concepts, this work provides a solid foundation for future
advancements in the field. Building on these results, the next steps will involve refining the approach through
real-world testing, enhancing simulation environments, and fostering continuous user engagement to
ensure robust and practical deployment.</p>
      <p>Declaration on Generative AI: During the preparation of this work, the author used ChatGPT and Grammarly for grammar and spelling checking, paraphrasing, and rewording. After using these services, the author reviewed and edited the content as needed and takes full responsibility for the publication’s content.</p>
      <p>Funding: The PhD scholarship is co-funded by the Italian Ministry, the Piano Nazionale di Ripresa e Resilienza (PNRR), and Antares Vision.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <article-title>A meta-design approach to collaborative robotics to achieve sustainability goals</article-title>
          ,
          <source>CEUR-WS</source>
          <volume>3978</volume>
          (
          <year>2025</year>
          )
          <fpage>8</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Tzafestas</surname>
          </string-name>
          ,
          <article-title>Mobile robot control and navigation: A global overview</article-title>
          ,
          <source>Journal of Intelligent &amp; Robotic Systems</source>
          <volume>91</volume>
          (
          <year>2018</year>
          )
          <fpage>35</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Mahdi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Akgun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Saleh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dautenhahn</surname>
          </string-name>
          ,
          <article-title>A survey on the design and evolution of social robots-past, present and future</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>156</volume>
          (
          <year>2022</year>
          )
          <fpage>104193</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lieberman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Wulf</surname>
          </string-name>
          ,
          <source>End User Development (Human-Computer Interaction Series)</source>
          , Springer-Verlag, Berlin, Heidelberg,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Barricelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cassano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Piccinno</surname>
          </string-name>
          ,
          <article-title>End-user development, end-user programming and end-user software engineering: A systematic mapping study</article-title>
          ,
          <source>Systems and Software</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Ajaykumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Steele</surname>
          </string-name>
          ,
          <string-name>
            <surname>C.-M. Huang</surname>
          </string-name>
          ,
          <article-title>A survey on end-user robot programming</article-title>
          ,
          <source>ACM Comput. Surv</source>
          .
          <volume>54</volume>
          (
          <year>2021</year>
          ). doi:10.1145/3466819.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Kan</surname>
          </string-name>
          ,
          <article-title>Skill transfer learning for autonomous robots and human-robot cooperation: A survey</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Schoen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Henrichs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Siebert-Evenstone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Shafer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mutlu</surname>
          </string-name>
          ,
          <article-title>Coframe: A system for training novice cobot programmers</article-title>
          ,
          <source>in: HRI</source>
          ,
          <year>2022</year>
          . doi:10.1109/HRI53351.2022.9889345.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D. K.</given-names>
            <surname>Misra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <article-title>Tell me dave: Context-sensitive grounding of natural language to manipulation instructions</article-title>
          ,
          <source>Int. J. Rob. Res</source>
          . (
          <year>2016</year>
          ). doi:10.1177/0278364915602060.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gallo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Vaiani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          ,
          <article-title>End-user personalisation of humanoid robot behaviour through vocal interaction</article-title>
,
<source>in: Workshop on Robots for Humans</source>
, volume
          <volume>3794</volume>
          , CEUR-WS.org,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ferland</surname>
          </string-name>
          ,
<string-name>
  <given-names>A.</given-names>
  <surname>Tapus</surname>
</string-name>
,
<article-title>User profiling and behavioral adaptation for HRI: A survey</article-title>
          ,
          <source>Pattern Recognition Letters</source>
          <volume>99</volume>
          (
          <year>2017</year>
          )
          <fpage>3</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Alami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Clodic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Montreuil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Sisbot</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Chatila</surname>
          </string-name>
          ,
          <article-title>Toward human-aware robot task planning</article-title>
,
<source>in: AAAI Spring Symposium</source>
          ,
          <year>2006</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Foggia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Greco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roberto</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saggese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vento</surname>
          </string-name>
          ,
          <article-title>A social robot architecture for personalized real-time human-robot interaction</article-title>
          ,
          <source>IEEE Internet of Things Journal</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Guida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Tampalini</surname>
          </string-name>
          ,
          <article-title>A hybrid approach to user-oriented programming of collaborative robots</article-title>
          ,
          <source>Robotics and Computer-Integrated Manufacturing</source>
          <volume>73</volume>
          (
          <year>2022</year>
          )
          <fpage>102234</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <article-title>A systematic review on pill and medication dispensers from a human-centered perspective</article-title>
          ,
<source>Journal of Healthcare Informatics Research</source>
<volume>8</volume>
          (
          <year>2024</year>
          )
          <fpage>244</fpage>
          -
          <lpage>285</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <article-title>Designing human-robot collaboration for the preparation of personalized medicines</article-title>
          ,
          <source>GoodIT '23</source>
          ,
<publisher-name>ACM</publisher-name>
          ,
          <year>2023</year>
, pp.
          <fpage>135</fpage>
          -
          <lpage>140</lpage>
. doi:10.1145/3582515.3609527.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <article-title>Exploring the adoption of collaborative robots for the preparation of galenic formulations</article-title>
          ,
          <source>Information</source>
          <volume>16</volume>
          (
          <year>2025</year>
          )
          <fpage>559</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Baroni</surname>
          </string-name>
          ,
          <article-title>Preparation of personalized medicines through collaborative robots: A hybrid approach to the end-user development of robot programs</article-title>
          ,
          <source>ACM Journal on Responsible Computing</source>
          (
          <year>2025</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>G.</given-names>
            <surname>Bimbatti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          , et al.,
<article-title>Can ChatGPT support end-user development of robot programs?</article-title>
          ,
          <source>in: CEUR Workshop Proceedings</source>
          , volume
          <volume>3408</volume>
          ,
          <year>2023</year>
          , p.
          <fpage>8</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
<article-title>Integrating ChatGPT with Blockly for end-user development of robot tasks</article-title>
          ,
          <source>HRI '24</source>
          ,
<publisher-name>ACM</publisher-name>
          , New York, NY, USA,
          <year>2024</year>
, pp.
          <fpage>478</fpage>
          -
          <lpage>482</lpage>
. doi:10.1145/3610978.3640653.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gargioni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Alami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Fogli</surname>
          </string-name>
          ,
<article-title>Towards a hybrid LLM/model-based architecture for robot coaching: An instance of human-machine collaboration</article-title>
          ,
          <source>in: CEUR Workshop Proceedings, CEUR-WS</source>
          ,
          <year>2025</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>