<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Negotiation in Intent-Based Interactions: Bridging the User Interface Accessibility and Usability Gap with LLMs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Qi Ai</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Advisors: Prof. Maristella Matera, Dr. Micol Spitale</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Politecnico di Milano</institution>
          ,
          <addr-line>DEIB, Piazza Leonardo da Vinci, 32, Milano, 20133</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>6</fpage>
      <lpage>10</lpage>
      <kwd-group>
        <kwd>Usability</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Background</title>
      <p>
        Due to limitations in cognitive, linguistic, and perceptual abilities, individuals with Intellectual
Disabilities (ID) face substantial barriers when interacting with digital systems [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These challenges
include difficulties in understanding abstract symbols, executing multi-step tasks, interpreting dynamic
interfaces, and handling error feedback [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. While the W3C Web Content Accessibility Guidelines
(WCAG) recommend principles such as simplified language, consistent navigation, and error prevention
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], they mainly address static Web content and generalized user models, often overlooking the diverse
abilities and evolving needs of users with ID. As a result, despite the increasing ubiquity of digital
technologies, many systems remain inaccessible or ineffective for this population [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Large Language Models (LLMs), with their advanced capabilities in natural language understanding,
generation, and reasoning, have the potential to revolutionize user interfaces and lay the foundation for
intent-based interaction paradigms [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In such paradigms, users express their goals or needs through
natural language, while the system interprets and acts on these inputs, abstracting away technical
complexity. This shifts the user's focus from how to perform a task to articulating what they want to
achieve [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. This evolution creates unprecedented opportunities for individuals with ID to engage
meaningfully with intelligent agents, digital services, and low-code platforms. It empowers them to
perform daily tasks independently and manage their smart environments, thereby enhancing autonomy
and reducing reliance on caregivers [
        <xref ref-type="bibr" rid="ref4 ref7">4, 7</xref>
        ].
      </p>
      <p>
        Nevertheless, relying solely on natural language is often insufficient to bridge the gap between users’
intents and system behavior: LLMs can struggle to interpret ambiguous commands (e.g., “make the room warmer”) due to the inherent vagueness of natural
language [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. Another concern is that LLMs may produce non-deterministic, flawed, or hallucinated
outputs that non-experts often struggle to evaluate and correct, thereby posing significant risks to user
experience, privacy, and security [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ]. These risks may be further exacerbated when user knowledge
or preferences conflict with system safety requirements [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>
        In recent years, negotiation mechanisms have gained attention as a promising means of fostering
mutual understanding in human–AI interactions, highlighting their potential to address the
aforementioned challenges with innovative and practical solutions. Studies show that LLMs such as ChatGPT-4,
with appropriate prompting, can effectively identify ambiguities and errors [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Through iterative
user feedback, the model can progressively refine its understanding, enhance self-repair capabilities,
and improve output quality over successive rounds [14]. Consequently, interactive negotiation for
clarifying ambiguities, resolving conflicts, and facilitating collaborative decision-making is regarded as
more reliable and beneficial for enhancing user satisfaction than direct generation approaches [
        <xref ref-type="bibr" rid="ref13">13, 15</xref>
        ].
      </p>
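The multi-round clarification described above can be sketched as a minimal loop. This is an illustrative toy, not the system proposed in this research: the vague-term vocabulary, option lists, and the `clarify` helper are all hypothetical, and the "user reply" is simulated rather than elicited interactively.

```python
# Illustrative sketch (hypothetical, not this paper's system): an agent
# replaces vague terms in a smart-home command with concrete values
# supplied by the user, one clarification round per ambiguity.

VAGUE_TERMS = {
    "warmer": ("target temperature", ["20 C", "22 C", "24 C"]),
    "brighter": ("brightness level", ["50%", "75%", "100%"]),
}

def clarify(command: str, user_answers: dict) -> dict:
    """Run one clarification round per vague term found in the command."""
    slots, rounds = {}, 0
    for term, (slot, options) in VAGUE_TERMS.items():
        if term in command:
            # The agent would ask, e.g., "Which target temperature: 20 C, 22 C or 24 C?"
            choice = user_answers[slot]            # simulated user reply
            if choice not in options:
                raise ValueError(f"unsupported {slot}: {choice}")
            slots[slot] = choice
            rounds += 1
    return {"command": command, "slots": slots, "rounds": rounds}

print(clarify("make the room warmer", {"target temperature": "22 C"}))
# → {'command': 'make the room warmer', 'slots': {'target temperature': '22 C'}, 'rounds': 1}
```

A real agent would generate the question and parse the reply with an LLM; constraining answers to a known option list is one way to keep hallucinated or unsafe values out of the final command.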
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Intent Disambiguation and Multimodal Interaction</title>
        <p>
          Existing research has proposed disambiguation strategies for conversations with AI chatbots, such
as follow-up clarifications, rephrasing, and suggestive questions [16]. Other approaches aim to elicit
user mental models through compositional paradigms, such as the Rule_5W framework, which helps
users define articulated rules [17]. However, these methods often rely on specific communication skills,
including grammatical, sociolinguistic, discourse, and strategic abilities [18, 19], which may not match
the profiles of individuals with ID. In addition, limited abstract reasoning and difficulties translating
intentions into structured commands further hinder their ability to design prompts and interact with
systems effectively [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], potentially exacerbating their marginalization in digital technologies [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>
          A growing body of research highlights the value of multimodal interfaces in supporting both the
interpretation and expression of user intents [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. For example, platforms such as IFTTT, Atooma, and
Locale use icon-based visual languages to represent rules, enabling end-users to manage and customize
their IoT-enabled smart environments more intuitively [17, 20] (e.g., see Figure 1). Expanding on
this approach, Calò and De Russis combined LLMs with visual cues to clarify commands and capture
nuanced user intents [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] (see Figure 2).
        </p>
        <p>For individuals with neurodevelopmental disorders, research indicates that incorporating embodied
and multimodal interaction into technology design can reduce cognitive load and enhance information
comprehensibility [21]. Intelligent personal assistants that integrate voice commands with adaptive
interfaces have proven effective in supporting daily tasks and independence in this population [22].
Spitale et al. further demonstrated that physical socially-assistive robots outperform virtual ones in
boosting language skills and engagement during speech therapy for children with language impairments
[23]. Additionally, Morra et al. [24] developed a tangible toolkit that enables users with ID to engage in
technology creation through physical manipulation and visual–tactile affordances, thereby improving
system usability and fostering skill development, self-confidence, and autonomy [25] (see Figure 3).</p>
        <p>[Figure 1 (described): an icon-based screen for creating rule events, e.g., “JustAwake” or “AlarmRinging”, each active from 07 a.m. to 09 a.m., with an option to add a new event.]</p>
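The trigger-action rule style behind platforms such as IFTTT can be approximated in a few lines. The `Event` fields and `rule_fires` helper below are illustrative assumptions, not any platform's actual API; the event names and time window mirror the Figure 1 example.

```python
# A minimal, hypothetical sketch of an IFTTT-style trigger-action rule:
# the rule fires when any of its OR-combined events occurs inside its
# active time window. Not any real platform's API.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    start_hour: int  # inclusive, 24h clock
    end_hour: int    # exclusive

    def active(self, hour: int) -> bool:
        return self.start_hour <= hour < self.end_hour

# "JustAwake or AlarmRinging, from 07 a.m. to 09 a.m.", as in Figure 1:
rule_events = [Event("JustAwake", 7, 9), Event("AlarmRinging", 7, 9)]

def rule_fires(fired_event: str, hour: int) -> bool:
    """True if the fired event matches one of the rule's events in its window."""
    return any(e.name == fired_event and e.active(hour) for e in rule_events)

print(rule_fires("AlarmRinging", 8))   # True: inside the 07-09 window
print(rule_fires("AlarmRinging", 10))  # False: outside the window
```

Representing rules as explicit data like this is what lets icon-based editors render and validate them without free-form language.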
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Negotiation in Human-Agent Dialogues</title>
        <p>Recent breakthroughs in Generative AI have spurred advancements in human-machine negotiation
dialogue systems [26]. These systems typically involve goal-oriented, multi-turn interactions between
humans and dialogue agents [27] (see Figure 4). Their core mechanisms integrate logical reasoning,
dynamic strategies (e.g., argumentation, persuasion, confrontation, and compromise), and psychological
factors to reach mutually acceptable solutions through strategic information exchange [28] (see Figure 5).</p>
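A toy version of this negotiation cycle, reduced to alternating concessions on a single numeric issue, might look like the sketch below. All values, the 25% concession step, and the `negotiate` function itself are invented for illustration; real systems layer argumentation and preference modeling on top of such a loop.

```python
# Hypothetical sketch of a negotiation cycle: two parties exchange offers
# and concede toward each other until a mutually acceptable deal is found
# or the rounds run out. The concession rate and tolerance are invented.

def negotiate(agent_target: float, user_target: float,
              tolerance: float = 0.5, max_rounds: int = 10):
    """Alternating concessions on one numeric issue (e.g., temperature in C)."""
    offer_a, offer_u = agent_target, user_target
    for round_no in range(1, max_rounds + 1):
        if abs(offer_a - offer_u) <= tolerance:      # deal accepted
            return {"deal": round((offer_a + offer_u) / 2, 2), "rounds": round_no}
        step = (offer_u - offer_a) * 0.25            # each side concedes 25% of the gap
        offer_a += step
        offer_u -= step
    return {"deal": None, "rounds": max_rounds}      # not accepted

# A safety-aware agent prefers 20 C while the user asks for 26 C:
print(negotiate(20.0, 26.0))
# → {'deal': 23.0, 'rounds': 5}
```

The loop illustrates the integrative case (both sides move toward mutual gain); distributive tactics would instead manipulate the concession step asymmetrically.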
        <p>Negotiation strategies for dialogue agents are commonly categorized into integrative, distributive,
and multi-party types [28]. Integrative negotiation promotes mutual gain through eliciting preferences,
empathy, and coordinated proposals [26]. Distributive negotiation aims to maximize unilateral interests,
employing tactics like contesting, empowerment, biased processing, and avoidance [29]. Multi-party
negotiation requires modeling group dynamics, often addressed through reinforcement learning [30] or
graph neural networks [31] to analyze complex subgroup interactions.</p>
        <p>Beyond strategic considerations, the personality traits of negotiators also play a critical role in
the negotiation process [28]. This involves mind modeling, which includes assessing psychological
preferences, inferring intent, and predicting responses by mapping utterances to dialogue behaviors. It
also involves understanding the emotional dynamics between negotiators, often subjectively measured
by outcome satisfaction and partner perception [26].</p>
        <p>[Figures 4–5 (described): the negotiation cycle between a conversational agent and a human, in which each negotiator’s preferences and strategy drive an exchange of information and potential outcomes until a deal is either accepted or not accepted.]</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Research Objectives</title>
      <p>
        Although LLMs can understand complex instructions and generate coherent and contextually relevant
responses, current LLM-based systems still struggle with ambiguous intents, conflicting user preferences,
and consistent reliability [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Moreover, existing prompt and interface designs often fail to account for
the communication, cognitive, and operational limitations of individuals with ID, resulting in reduced
usability and compromised safety [
        <xref ref-type="bibr" rid="ref1 ref4">1, 4</xref>
        ]. This underscores the urgent need for innovative approaches
that enable more inclusive technology experiences.
      </p>
      <p>This research will explore the synergy between iterative negotiation strategies and multimodal
interaction paradigms, and investigate how they can be effectively integrated into LLM-driven
conversational agents. The goal is to enable collaborative interactions that harmonize user intent with agent
capabilities, allowing both parties to jointly shape outcomes, thereby enhancing the user’s sense of
control and trust. Ultimately, the research seeks to improve the accessibility and usability of digital
systems and end-user development platforms [33], with a specific focus on supporting individuals
with ID as a key beneficiary group by delivering more reliable, accurate, and user-centered interaction
outcomes. To achieve these objectives, this research will address the following questions:</p>
      <list list-type="bullet">
        <list-item><p>What core barriers do individuals with ID face in intent-based human–agent interactions? What types of tasks and preferences do they commonly exhibit?</p></list-item>
        <list-item><p>Which multi-turn negotiation strategies effectively clarify ambiguous intents and resolve conflicts between user preferences and safety needs?</p></list-item>
        <list-item><p>How can query prompts in human–agent negotiation dialogues be categorized to align with specific disambiguation goals and collaborative decision-making contexts?</p></list-item>
        <list-item><p>Which multimodal interaction paradigms support users with ID in expressing their intent and interpreting system responses during iterative negotiation with LLM-based conversational agents?</p></list-item>
      </list>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology and Early Progress</title>
      <p>As part of the preliminary research, a systematic review is being conducted to establish a theoretical
foundation for understanding iterative negotiation between humans and conversational AI agents in
the context of HCI. This review aims to identify research gaps, track emerging trends, and inform the
design of subsequent experiments.</p>
      <p>Next, the study will employ Research through Design [34] and Participatory Design methods [35],
engaging users with ID in iterative prototyping, evaluation, and refinement of multimodal interfaces
with multi-round negotiation strategies through workshops and focus groups. The study will be driven
and validated by three key use cases illustrating the benefits of assistive technologies for this population
[36]: daily self-management (e.g., task automation, smart home personalization), skill development (e.g.,
task guidance, creative expression), and social participation (e.g., communication, emotional support).</p>
      <p>Evaluation will follow a multi-stage, mixed-methods approach [37], combining short-term usability
testing with long-term user experience tracking. Data will be gathered through qualitative methods (e.g.,
interviews, contextual observation, software logs [38]), and multimodal techniques (e.g., eye-tracking,
speech emotion analysis, gesture recognition), alongside quantitative metrics like task completion rates
[39], negotiation success rates (e.g., F1 scores [26]), and user satisfaction (e.g., UEQ scale [40]). The
collected data will be analyzed using thematic analysis, statistical methods, and triangulation, with
comparative studies validating solution effectiveness against existing approaches.</p>
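The quantitative metrics listed above are straightforward to compute. The sketch below shows a plain task completion rate, a precision/recall-style F1 over negotiation outcomes, and a mean over UEQ-style item scores; all sample counts and scores are invented for illustration.

```python
# Illustrative metric computations (sample data is invented):
# task completion rate, F1 over negotiation outcomes, mean UEQ-style score.

def completion_rate(outcomes) -> float:
    """Fraction of tasks completed; outcomes is a list of booleans."""
    return sum(outcomes) / len(outcomes)

def f1(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall over negotiated agreements."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical study counts: 8 of 10 tasks completed; 6 correct agreements,
# 2 spurious agreements (false positives), 1 missed agreement (false negative):
tcr = completion_rate([True] * 8 + [False] * 2)
score = f1(tp=6, fp=2, fn=1)
ueq_mean = sum([1.5, 2.0, 0.5, 1.0]) / 4   # UEQ items range from -3 to +3

print(tcr, round(score, 3), ueq_mean)  # → 0.8 0.8 1.25
```

In practice what counts as a true/false positive for a negotiation (e.g., an agreement that violates a safety constraint) must be defined per use case before F1 is meaningful.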
      <p>Throughout the research, a lessons-learned approach will identify challenges and opportunities from
prototype development and user studies, distilling them into design patterns and toolkits that can favor
the replicability and applicability of the acquired knowledge in independent design contexts.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Expected Results</title>
      <p>
        The research will deliver an iterative negotiation framework for human–AI collaboration, integrating
user modeling, scenario-based prompt strategies, and multimodal interaction paradigms. The framework
is designed to be tailored to the abilities and contextual needs of individuals with ID, enabling them to
control interconnected services, apps, and devices at an appropriate level of abstraction through
intent-based natural language interactions with LLM-powered agents. The goal is to resolve ambiguities and
conflicts through user-centered automated negotiations [41], creating smarter, more inclusive, and fluid
interaction experiences. Furthermore, this research will complement and extend the W3C WCAG [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
with interface design guidelines that shift accessibility from static compliance and one-way information
delivery toward dynamic empowerment through collaborative, two-way interaction. The outcomes
will also inform interaction model toolkits and define the functional and architectural requirements for
digital systems and end-user development platforms [33], supporting the full process from requirements
specification to system deployment.
      </p>
      <p>From a broader societal perspective, this research aims to realize the potential of integrating AI into
daily life while considering the needs and complexities of the communities such technologies are intended to serve. It
promotes the flexible, cross-domain application of natural language technologies in assistive intelligent
interaction [25] and mitigates the inequalities that techno-solutionism can generate [42]. The proposed
framework will indeed enhance system controllability and digital autonomy for users with ID and
other vulnerable populations, with potential benefits in language training, cognitive enhancement, and
mental well-being.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Individuals with ID face significant challenges interacting with digital systems. Although LLMs offer
promising natural language capabilities, current systems still struggle with ambiguity, conflicting
user preferences, and flawed outputs. This research will investigate integrating iterative negotiation
strategies and multimodal interactions (e.g., visual symbols, speech, gestures) into LLM-driven agents.
These combined approaches aim to enhance system usability and accessibility, empowering users
with ID to engage in intent-based tasks, thereby improving their autonomy and quality of life. The
primary expected outcome is a negotiation framework featuring personalized and intelligent interaction
experiences for individuals with ID and broader vulnerable groups. The research will also deliver
guidelines, toolkits, and system architecture requirements to promote digital inclusion and well-being.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used ChatGPT and Grammarly in order to: Grammar
and spelling check, Paraphrase and reword. After using these tools/services, the author reviewed and
edited the content as needed and takes full responsibility for the publication’s content.</p>
      <p>[14] Y. Liu, T. Le-Cong, R. Widyasari, C. Tantithamthavorn, L. Li, X.-B. D. Le, D. Lo, Refining
chatgpt-generated code: Characterizing and mitigating code quality issues, TOSEM 33 (2024) 1–26.
[15] N. Lainwright, M. Pemberton, Assessing the response strategies of large language models under
uncertainty: A comparative study using prompt engineering, OSF Preprints 1 (2024).
[16] K. Keyvan, J. X. Huang, How to approach ambiguous queries in conversational search: A survey
of techniques, approaches, tools, and challenges, CSUR 55 (2022) 1–40.
[17] G. Desolda, C. Ardito, M. Matera, Empowering end users to customize their smart environments:
model, composition paradigms, and domain-specific tools, TOCHI 24 (2017) 1–52.
[18] D. Dippold, “can i have the scan on tuesday?” user repair in interaction with a task-oriented
chatbot and the question of communication skills for ai, Journal of Pragmatics 204 (2023) 21–32.
[19] J. D. Zamfirescu-Pereira, R. Y. Wong, B. Hartmann, Q. Yang, Why johnny can’t prompt: how
non-ai experts try (and fail) to design llm prompts, in: Proc. of CHI, 2023, pp. 1–21.
[20] B. Ur, E. McManus, M. Pak Yong Ho, M. L. Littman, Practical trigger-action programming in the
smart home, in: Proc. of CHI, 2014, pp. 803–812.
[21] F. Catania, M. Spitale, F. Garzotto, Conversational agents in therapeutic interventions for
neurodevelopmental disorders: a survey, CSUR 55 (2023) 1–34.
[22] M. K. Wolters, F. Kelly, J. Kilgour, Designing a spoken dialogue interface to an intelligent cognitive
assistant for people with dementia, Health Inform. J. 22 (2016) 854–866.
[23] M. Spitale, S. Silleresi, F. Garzotto, M. J. Matarić, Using socially assistive robots in speech-language
therapy for children with language impairments, Int. J. Soc. Robot. 15 (2023) 1525–1542.
[24] D. Morra, G. Caslini, M. Mores, F. Garzotto, M. Matera, Makenodes: Opening connected-iot
making to people with intellectual disability, IJHCS 190 (2024) 103325.
[25] M. C. Safari, S. Wass, E. Thygesen, Motivation of people with intellectual disabilities in technology
design activities: the role of autonomy, competence, and relatedness, BIT 42 (2023) 89–107.
[26] K. Chawla, J. Ramirez, R. Clever, et al., CaSiNo: A corpus of campsite negotiation dialogues for
automatic negotiation systems, Proc. NAACL-HLT 2021 1 (2021) 3167–3185.
[27] Z. Zhang, L. Liao, X. Zhu, T.-S. Chua, Z. Liu, Y. Huang, M. Huang, Learning goal-oriented dialogue
policy with opposite agent awareness, arXiv preprint arXiv:2004.09731 (2020).
[28] H. Zhan, Y. Wang, Z. Li, et al., Let’s negotiate! a survey of negotiation dialogue systems, in:
Y. Graham, M. Purver (Eds.), Findings of EACL 2024, ACL, St. Julian’s, Malta, 2024, pp. 2019–2031.
[29] M. L. Fransen, E. G. Smit, P. W. Verlegh, Strategies and motives for resistance to persuasion: An
integrative framework, Frontiers in Psychology 6 (2015) 1201.
[30] K. Georgila, C. Nelson, D. Traum, Single-agent vs. multi-agent techniques for concurrent
reinforcement learning of negotiation dialogue policies, in: Proc. ACL, 2014, pp. 500–510.
[31] J. Li, M. Liu, Z. Zheng, et al., Dadgraph: A discourse-aware dialogue graph neural network for
multiparty dialogue machine reading comprehension, in: Proc. IJCNN, IEEE, 2021, pp. 1–8.
[32] J. Brett, L. Thompson, Negotiation, OBHDP 136 (2016) 68–79.
[33] H. Lieberman, F. Paternò, M. Klann, V. Wulf, End-user development: An emerging paradigm, in:
End user development, Springer, 2006, pp. 1–8.
[34] J. Zimmerman, J. Forlizzi, S. Evenson, Research through design as a method for interaction design
research in hci, in: Proc. of CHI, 2007, pp. 493–502.
[35] M. J. Muller, S. Kuhn, Participatory design, CACM 36 (1993) 24–28.
[36] A. Klavina, P. Pérez-Fuster, J. Daems, et al., The use of assistive technology to promote practical
skills in persons with autism spectrum disorder and intellectual disabilities: A systematic review,
Digital Health 10 (2024) 20552076241281260.
[37] J. Brannen, Mixing methods: The entry of qualitative and quantitative approaches into the research
process, Int. J. Soc. Res. Methodol. 8 (2005) 173–184.
[38] D. M. Hilbert, D. F. Redmiles, Extracting usability information from user interface events, CSUR
32 (2000) 384–421.
[39] B. Albert, T. Tullis, Measuring the user experience: collecting, analyzing, and presenting usability
metrics, Newnes, 2013.
[40] M. Schrepp, A. Hinderks, J. Thomaschewski, Construction of a benchmark for the user experience
questionnaire (ueq), Int. J. Interact. Multimed. Artif. Intell. 4 (2017) 40–44.
[41] F. Paternò, M. Burnett, G. Fischer, et al., Artificial intelligence versus end-user development: a
panel on what are the tradeoffs in daily automations?, in: Interact, Springer, 2021, pp. 340–343.
[42] D. Wang, R. Denton, A. K. Sinha, S. Sheth, et al., Unanticipated lessons from communities: Navigating
society-centered research in the ai era, in: Proc. of CHI EA ’25, ACM, New York, NY, USA, 2025.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Association</surname>
          </string-name>
          , et al.,
          <article-title>Diagnostic and statistical manual of mental disorders: DSM-5</article-title>
          , American psychiatric association,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. C.</given-names>
            <surname>Chan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. W.</given-names>
            <surname>Li-Tsang</surname>
          </string-name>
          , et al.,
          <article-title>Competence of people with intellectual disabilities on using human-computer interface</article-title>
          ,
          <source>Research in Developmental Disabilities</source>
          <volume>30</volume>
          (
          <year>2009</year>
          )
          <fpage>107</fpage>
          -
          <lpage>123</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          W3C,
          <article-title>Web content accessibility guidelines (wcag) 2.1</article-title>
          , https://www.w3.org/TR/WCAG21/,
          <year>2018</year>
          . Accessed: 2025-07-14.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Lussier-Desrochers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Normand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Romero-Torres</surname>
          </string-name>
          , et al.,
          <article-title>Bridging the digital divide for people with intellectual disability</article-title>
          ,
          <source>Cyberpsychology: Journal of Psychosocial Research on Cyberspace</source>
          <volume>11</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <article-title>Towards intent-based user interfaces: Charting the design space of intent-ai interactions across task types</article-title>
          ,
          <source>arXiv preprint arXiv:2404.18196</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Maes</surname>
          </string-name>
          ,
          <article-title>Direct manipulation vs. interface agents</article-title>
          ,
          <source>interactions</source>
          <volume>4</volume>
          (
          <year>1997</year>
          )
          <fpage>42</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R.</given-names>
            <surname>Sheehan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hassiotis</surname>
          </string-name>
          ,
          <article-title>Digital mental health and intellectual disabilities: state of the evidence and future directions</article-title>
          ,
          <source>BMJ Ment Health</source>
          <volume>20</volume>
          (
          <year>2017</year>
          )
          <fpage>107</fpage>
          -
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.</given-names>
            <surname>Calò</surname>
          </string-name>
          , L. De Russis,
          <article-title>Enhancing smart home interaction through multimodal command disambiguation</article-title>
          ,
          <source>Personal and Ubiquitous Computing</source>
          <volume>28</volume>
          (
          <year>2024</year>
          )
          <fpage>985</fpage>
          -
          <lpage>1000</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Petrick</surname>
          </string-name>
          ,
          <article-title>On natural language based computer systems</article-title>
          ,
          <source>IBM J. Res. Dev</source>
          .
          <volume>20</volume>
          (
          <year>1976</year>
          )
          <fpage>314</fpage>
          -
          <lpage>325</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ouyang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Llm is like a box of chocolates: the non-determinism of chatgpt in code generation</article-title>
          , arXiv e-prints (
          <year>2023</year>
          ) arXiv-
          <fpage>2308</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Scoccia</surname>
          </string-name>
          ,
          <article-title>Exploring early adopters' perceptions of chatgpt as a code generation tool</article-title>
          , in: ASEW, IEEE,
          <year>2023</year>
          , pp.
          <fpage>88</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A.</given-names>
            <surname>Stampf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Colley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Girst</surname>
          </string-name>
          , E. Rukzio,
          <article-title>Exploring passenger-automated vehicle negotiation utilizing large language models for natural interaction</article-title>
          ,
          <source>in: Proc. AutomotiveUI</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>350</fpage>
          -
          <lpage>362</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Andrao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Morra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Paccosi</surname>
          </string-name>
          , et al.,
          <article-title>“This sounds unclear”: Evaluating chatgpt capability in translating end-user prompts into ready-to-deploy python code</article-title>
          .,
          <source>in: Proc. AVI</source>
          ,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>