<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Figmant - A Plugin for End-User Development of Wizard of Oz Experiments for Human-AI Interaction Design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tommaso Turchi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Francesco Faenza</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessio Malizia</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giacomo Bosio</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nicoletta Bruno</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of Pisa</institution>
          ,
          <addr-line>Pisa</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Hedron</institution>
          ,
          <addr-line>Livorno</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Molde University College</institution>
          ,
          <addr-line>Molde</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper introduces Figmant, a Figma plugin that enables designers to conduct Wizard of Oz (WoZ) experiments for evaluating novel human-AI interactions without requiring actual AI implementation. Figmant extends the familiar Figma environment to allow designers to rapidly prototype, test, and iterate on AI-driven interfaces. The plugin provides a dual-interface system: one for participants interacting with the simulated AI, and another for the “wizard” to control the simulated AI responses. Built using react-figma, Figmant democratizes the prototyping of AI interactions, empowering designers with limited technical expertise to participate fully in the design of AI-driven user experiences. The plugin enables designers to simulate a wide range of AI behaviors through component state manipulation, dynamic content changes, and contextual frame navigation - all without requiring programming knowledge.</p>
      </abstract>
      <kwd-group>
<kwd>end-user development</kwd>
        <kwd>wizard of oz</kwd>
        <kwd>prototyping</kwd>
        <kwd>human-AI interaction</kwd>
        <kwd>figma plugin</kwd>
        <kwd>design tools</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>As Artificial Intelligence (AI) becomes increasingly integrated into everyday digital products, designers
face a growing challenge: they must design interactions for AI-driven systems without necessarily
having the technical expertise to implement those systems. This creates a gap in the design process
where designers rely on developers to implement their designs, often leading to a disconnect between
design intent and implementation reality.</p>
      <p>
        The Wizard of Oz (WoZ) methodology [
        <xref ref-type="bibr" rid="ref1">1</xref>
] offers a potential solution, allowing designers to simulate
AI behavior during user testing without requiring actual AI implementation. However, existing WoZ
platforms are typically custom-built for specific domains [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], requiring technical expertise that many
designers lack.
      </p>
      <p>
        This paper presents Figmant, a Figma plugin that enables designers to conduct WoZ experiments
directly within their familiar design environment. By lowering the technical barriers to conducting
WoZ tests, Figmant empowers designers to take a more active role in the development of AI-driven
interfaces, aligning with the core principles of End-User Development (EUD) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Unlike
general-purpose WoZ platforms that require extensive configuration, Figmant integrates directly with Figma’s
design components and interactive features, allowing designers to simulate intelligent behaviors by
manipulating prototypes in ways that participants perceive as AI-driven interactions.
      </p>
      <p>The significance of this problem is underscored by the increasing prevalence of AI in digital products
and the corresponding need for designers to rapidly explore, test, and iterate on AI-driven interactions.
Without accessible tools, designers are often unable to validate their ideas or gather user feedback
early, leading to missed opportunities for innovation and a disconnect between design intent and
implementation. By addressing this gap, Figmant aims to empower a broader range of practitioners
to participate in the design of intelligent systems, ultimately improving the quality and inclusivity of
AI-driven user experiences.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Our research builds upon three main areas of prior work: Wizard of Oz methodologies, end-user
development for AI, and design tool extensions.</p>
      <sec id="sec-2-1">
        <title>2.1. Wizard of Oz for AI Interface Design</title>
        <p>
          The Wizard of Oz technique has been widely used to prototype and evaluate novel interfaces before
their technical implementation. Klemmer et al. [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] developed Suede, a prototyping tool for speech
interfaces using the WoZ approach. More recently, Hu et al. [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] introduced Wizundry, a platform
supporting multiple wizards collaborating to simulate more sophisticated speech-based interfaces.
        </p>
        <p>These systems demonstrate the value of WoZ methodologies for exploring novel interaction
paradigms, but they typically require specialized platforms separate from designers’ usual workflows.
While Wizundry provides powerful collaboration features for speech-based interfaces with multiple
wizards, Figmant takes a different approach by integrating WoZ capabilities directly into Figma, focusing
on visual interface manipulation within a tool already familiar to designers.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. End-User Development for AI</title>
        <p>
          End-User Development (EUD) provides a foundation for empowering non-programmers to shape and
adapt digital systems to their needs. The meta-design framework, as articulated by Fischer et al. [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ],
emphasizes the creation of environments where users are not just passive consumers but active
co-designers, able to extend and reconfigure systems. This approach is particularly relevant for AI, where
the complexity and opacity of underlying algorithms often create barriers for designers and other
stakeholders. By applying meta-design principles, tools can be developed that lower these barriers,
enabling designers to experiment with and test possible AI behaviors without requiring deep technical
expertise. This aligns with the broader EUD vision described by Lieberman et al. [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], which advocates
for making systems modifiable by end users. Recent work by Dove et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] further highlights the
challenges faced by UX practitioners in prototyping and envisioning AI-driven experiences. Figmant
builds on these insights by providing a meta-design environment within Figma, allowing designers to
simulate, adapt, and iterate on AI interactions as part of their regular workflow, thus operationalizing
EUD principles in the context of human-AI interaction design.
        </p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Design Tool Extensions</title>
        <p>
          The extension of design tools to support new capabilities has been explored in various contexts. Myers
and Stylos [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] addressed API usability, highlighting the importance of providing appropriate abstractions
for different user groups. In the context of voice interfaces, Cambre and Kulkarni [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ] discussed the
challenges of designing for voice and the need for better prototyping tools.
        </p>
        <p>Figmant builds on these insights by extending Figma — a widely used design tool — with capabilities
specifically tailored to the needs of designers working on AI interfaces. By leveraging Figma’s existing
component system and adding a layer for WoZ simulation, Figmant provides a seamless extension to
designers’ existing workflows.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Figmant System Design</title>
      <p>Figmant is designed as a Figma plugin that enables designers to conduct Wizard of Oz experiments for
AI interfaces. The plugin extends Figma’s capabilities to support the specific needs of simulating AI
interactions while maintaining the familiar design environment that designers already use. Below we
describe the key components and features of our system.</p>
      <sec id="sec-3-1">
        <title>3.1. Plugin Architecture</title>
        <p>Figmant is built using react-figma, allowing for seamless integration with the Figma environment.
The plugin leverages Figma’s API to access and manipulate design elements while maintaining a clear
separation between design assets and the WoZ experiment configuration. This separation ensures that
designers can continue to iterate on their designs without disrupting experiment setups.</p>
        <p>The plugin architecture consists of three main interfaces:
1. The Designer Interface: Allows designers to configure the WoZ experiment, designate which
elements can be manipulated by the wizard, and define the range of possible responses.
2. The Participant Interface: A clean view of the prototype that participants interact with,
believing they are interacting with an AI system.
3. The Wizard Interface: A control panel that allows the human "wizard" to observe participant
actions and trigger appropriate responses in real-time.</p>
        <p>The plugin maintains a synchronized state between these interfaces, ensuring that actions in the
wizard interface are immediately reflected in the participant interface.</p>
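        <p>This synchronized state can be illustrated with a minimal sketch: a pure reducer that maps wizard actions onto the participant-visible state. The action types and field names below are our illustrative assumptions, not Figmant’s actual API.</p>

```typescript
// Illustrative sketch of wizard/participant state synchronization.
// All type and field names here are assumptions for exposition.
type WizardAction =
  | { kind: "setState"; componentId: string; state: string }
  | { kind: "navigate"; frameId: string }
  | { kind: "setText"; nodeId: string; text: string };

interface ExperimentState {
  currentFrame: string;
  componentStates: Record<string, string>;
  textOverrides: Record<string, string>;
}

// Pure reducer: each wizard action yields the next participant-visible state.
function apply(state: ExperimentState, action: WizardAction): ExperimentState {
  switch (action.kind) {
    case "setState":
      return {
        ...state,
        componentStates: { ...state.componentStates, [action.componentId]: action.state },
      };
    case "navigate":
      return { ...state, currentFrame: action.frameId };
    case "setText":
      return {
        ...state,
        textOverrides: { ...state.textOverrides, [action.nodeId]: action.text },
      };
  }
}
```

        <p>In the plugin, each wizard action would be broadcast over the plugin’s messaging channel and applied on the participant side, so that the participant view reflects the change immediately.</p>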
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Experiment Configuration</title>
        <p>Setting up a WoZ experiment in Figmant involves defining interaction points within the Figma prototype
and associating them with possible AI responses. Designers can specify:
• Trigger Components: UI elements that participants can interact with
• Response Options: A library of possible responses for each trigger
• Response Timing: Simulated processing time to maintain the illusion of AI processing
• Recording Settings: Options for capturing interaction data during the experiment</p>
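        <p>As a concrete illustration, these four configuration categories might be captured in a structure like the following; all field names are hypothetical and serve only to make the setup tangible.</p>

```typescript
// Hypothetical shape of a Figmant experiment configuration (field names assumed).
interface TriggerConfig {
  componentId: string;                   // UI element participants interact with
  responses: string[];                   // library of possible wizard responses
  delayMs: { min: number; max: number }; // simulated AI "processing" time
}

interface ExperimentConfig {
  triggers: TriggerConfig[];
  recording: { logActions: boolean; exportFormat: "json" | "csv" };
}

const exampleConfig: ExperimentConfig = {
  triggers: [
    {
      componentId: "search-input",
      responses: ["No results found", "Here are 3 suggestions"],
      delayMs: { min: 500, max: 2000 },
    },
  ],
  recording: { logActions: true, exportFormat: "json" },
};
```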
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Wizard Interface Capabilities</title>
        <p>To further illustrate the unique capabilities of Figmant, we provide a detailed walkthrough of the Wizard
Interface functionalities:
• Live Component State Control: The wizard can instantly toggle between predefined states
of any interactive component (e.g., switching a chatbot card from “waiting” to “responded”),
enabling dynamic, context-sensitive UI updates.
• Frame Navigation: The wizard can navigate the participant’s view to any frame in the prototype,
simulating context-aware or multi-step AI workflows.
• Dynamic Content Insertion: The wizard can insert or modify text, images, or other content in
real time, simulating AI-generated responses or suggestions.
• Pattern Recognition Simulation: The wizard can trigger specific responses based on observed
participant actions, mimicking AI recognition of user intent or behavior patterns.
• Simulated Processing Delays: The wizard can introduce artificial delays before responses
appear, maintaining the illusion of AI computation.
• Error and Fallback States: The wizard can trigger error messages or fallback options, allowing
designers to test how users respond to system limitations.
• Interaction Logging: All participant actions and wizard interventions can be recorded for later
analysis, supporting iterative design and evaluation.</p>
        <p>This set of features allows designers to simulate a wide range of AI behaviors and user experiences,
even in the absence of actual AI backends.</p>
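        <p>As one concrete example, the simulated processing delay with a typing indicator can be sketched as follows; the helper names are hypothetical and stand in for whatever mechanism the plugin uses to update the participant view.</p>

```typescript
// Sketch: deliver a wizard-chosen response after a randomized "processing" delay.
// Function and parameter names are illustrative, not Figmant's API.
function simulateDelay(minMs: number, maxMs: number): Promise<void> {
  const ms = minMs + Math.random() * (maxMs - minMs);
  return new Promise((resolve) => setTimeout(resolve, ms));
}

async function sendResponse(
  response: string,
  delay: { min: number; max: number },
  show: (text: string) => void, // callback updating the participant view
): Promise<void> {
  show("typing…");                       // typing indicator sustains the illusion
  await simulateDelay(delay.min, delay.max);
  show(response);                        // response appears as if computed by an AI
}
```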
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Design Principles</title>
        <p>
          The development of Figmant was guided by user-centered design and meta-design principles.
User-centered design [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] ensured that the needs, workflows, and pain points of designers were central
throughout the process, while meta-design [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] informed the creation of an environment that empowers
users to adapt and extend the tool to their specific contexts. By embedding these frameworks into the
plugin’s architecture, we aimed to maximize accessibility, flexibility, and the potential for end-user
innovation.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Example Use Cases and Anticipated Benefits</title>
      <p>To illustrate how Figmant enables designers to prototype and test AI-driven interfaces, we present
several example use cases along with the anticipated benefits and challenges of our approach.</p>
      <sec id="sec-4-1">
        <title>4.1. Example Applications</title>
        <p>Conversational AI Interface A designer testing a chatbot interface that generates personalized
recommendations can use Figmant to create different component states representing various response
types, observe user inputs through the Wizard Interface, select appropriate responses (see Figure 1),
trigger typing indicators with timing delays to simulate natural conversation flow, and navigate to
specific frames representing different conversation stages. Additionally, designers can prepare multiple
alternative dialog flows and test them with participants to explore different conversational strategies.
AI-Driven Content Curation For testing an interface that uses AI to curate content based on
user interests, the wizard can observe user behavior, dynamically swap content components to show
personalized recommendations, simulate learning over time by gradually refining content selection,
and trigger different states showing varying confidence levels.</p>
        <p>Context-Aware Assistant For an AI assistant that responds to contextual cues, the wizard can
change suggested actions based on the user’s task state, navigate between contextual frames based
on inferred intent, manipulate component states to reflect different assistance levels, and simulate
proactive suggestions by triggering notifications at appropriate moments.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Expected Benefits</title>
        <p>Figmant’s integration with Figma should significantly reduce technical barriers, allowing designers with
no prior AI development experience to create and test interaction prototypes after minimal training.
This tight integration should also increase iteration speed, as designers can quickly modify prototypes
and immediately test changes with participants. Additionally, the low cost of creating and testing varied
AI behaviors should lead to broader exploration of design possibilities.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Anticipated Challenges</title>
        <p>Despite these benefits, several challenges remain. Maintaining perfect synchronization between wizard
and participant interfaces in complex scenarios will be technically demanding. The system may
face performance limitations with many possible AI responses or complex UI states. Perhaps most
significantly, managing multiple interaction points simultaneously could place substantial cognitive
demand on wizards, particularly when simulating sophisticated AI systems.</p>
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Workflow and Outcomes</title>
        <p>Figmant automatically records all interactions between participants and the wizard during each Wizard
of Oz experiment session. This includes user actions, wizard-triggered responses, and any changes
made to the prototype in real time. The resulting interaction logs can be exported for further analysis,
enabling designers and researchers to review user behavior, identify usability issues, and evaluate the
effectiveness of simulated AI interactions. This data-driven approach supports iterative refinement of
both the prototype and the experiment setup, helping teams make evidence-based design decisions and
improve the overall user experience.</p>
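        <p>A minimal sketch of such an interaction log, with assumed field names, might look like this:</p>

```typescript
// Illustrative structure of an exported interaction log (fields are assumptions).
interface LogEntry {
  timestamp: number;
  actor: "participant" | "wizard";
  event: string; // e.g. "clicked search-input", "triggered response #1"
}

class InteractionLog {
  private entries: LogEntry[] = [];

  // Record one participant action or wizard intervention.
  record(actor: LogEntry["actor"], event: string): void {
    this.entries.push({ timestamp: Date.now(), actor, event });
  }

  // Export the session for analysis, e.g. downloaded as a JSON file.
  export(): string {
    return JSON.stringify(this.entries, null, 2);
  }
}
```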
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. User Involvement and Evaluation</title>
      <p>While a formal user study has not yet been conducted, we plan to evaluate Figmant through a series of
user-centered studies involving professional designers and design students. These studies will focus on
usability, effectiveness, and the impact of Figmant on the prototyping workflow compared to traditional
methods. We will employ a combination of observation, interviews, and analysis of interaction logs to
assess how easily users can configure and run WoZ experiments, as well as the quality of the resulting
prototypes. Feedback from these studies will directly inform future development, with the goal of
further lowering barriers to entry and enhancing collaborative and semi-automated wizarding features.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Figmant represents a step toward democratizing the design of AI interactions by leveraging the principles
of End-User Development. By integrating WoZ methodology directly into designers’ existing tools, we
reduce the technical barriers to prototyping AI interactions while maintaining the flexibility needed to
explore innovative design directions.</p>
      <p>We believe this approach can significantly improve designers’ ability to prototype and test
AI-driven interfaces without requiring deep technical expertise. The ability to manipulate component
states, navigate between frames, and dynamically update content should prove especially valuable for
simulating AI behaviors that would otherwise require complex implementation.</p>
      <p>The importance of this work to the EUD community lies in its focus on empowering designers —
end-users of design tools but not typically developers — to create functional prototypes of AI systems.
By bringing the power of WoZ methodology into familiar design environments, Figmant exemplifies
how end-user development principles can be applied to emerging technology domains.</p>
      <p>
        As we continue to develop Figmant, our future work will focus on several key areas: implementing
semi-automated response capabilities with conditional logic to reduce wizard cognitive load; extending
the platform to support multiple wizards collaborating to simulate more complex AI systems, inspired by
Hu et al.’s [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] work; enhancing manipulation capabilities with animation controls and multimodal input
processing; and exploring transitions from wizard-simulated behaviors to actual AI implementations,
creating a bridge between early WoZ testing and later-stage AI development.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was produced with the co-funding of the European Union – Next Generation EU, in the
context of The National Recovery and Resilience Plan, Investment 1.5 Ecosystems of Innovation, Project
Tuscany Health Ecosystem (THE), ECS00000017. Spoke 3.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT and Grammarly for grammar and
spelling checks, paraphrasing, and rewording. After employing these services, the authors reviewed
and edited the content as needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
<mixed-citation>
          [1]
          <string-name><given-names>N.</given-names> <surname>Dahlbäck</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Jönsson</surname></string-name>,
          <string-name><given-names>L.</given-names> <surname>Ahrenberg</surname></string-name>,
          <article-title>Wizard of Oz studies - why and how</article-title>,
          <source>Knowl.-Based Syst.</source>
          <volume>6</volume> (<year>1993</year>) <fpage>258</fpage>-<lpage>266</lpage>.
          URL: https://doi.org/10.1016/0950-7051(93)90017-N. doi:10.1016/0950-7051(93)90017-N.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. C.</given-names>
            <surname>Yen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Seaborn</surname>
          </string-name>
          ,
          <string-name><given-names>C.</given-names> <surname>Liu</surname></string-name>,
          <article-title>Wizundry: A cooperative wizard of oz platform for simulating future speech-based interfaces with multiple wizards</article-title>
          ,
          <source>Proc. ACM Hum.-Comput. Interact</source>
          .
          <volume>7</volume>
          (
          <year>2023</year>
          ). URL: https://doi.org/10.1145/3579591. doi:10.1145/3579591.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Lieberman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Paternò</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Klann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Wulf</surname>
          </string-name>
          ,
          <source>End-User Development: An Emerging Paradigm</source>, Springer Netherlands, Dordrecht,
          <year>2006</year>, pp.
          <fpage>1</fpage>-<lpage>8</lpage>. URL: https://doi.org/10.1007/1-4020-5386-X_1. doi:10.1007/1-4020-5386-X_1.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Klemmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Landay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Aboobaker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Suede: a wizard of oz prototyping tool for speech user interfaces</article-title>
          ,
          <source>in: Proceedings of the 13th Annual ACM Symposium on User Interface Software and Technology, UIST '00</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2000</year>, pp.
          <fpage>1</fpage>-<lpage>10</lpage>. URL: https://doi.org/10.1145/354401.354406. doi:10.1145/354401.354406.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>G.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Giaccardi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. G.</given-names>
            <surname>Sutcliffe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mehandjiev</surname>
          </string-name>
          ,
          <article-title>Meta-design: a manifesto for end-user development</article-title>
          ,
          <source>Commun. ACM</source>
          <volume>47</volume>
          (
          <year>2004</year>
          )
          <fpage>33</fpage>
          -
          <lpage>37</lpage>
          . URL: https://doi.org/10.1145/1015864.1015884. doi:10.1145/1015864.1015884.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Dove</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Halskov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Forlizzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zimmerman</surname>
          </string-name>
          ,
          <article-title>UX design innovation: Challenges for working with machine learning as a design material</article-title>
          ,
          <source>in: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems, CHI '17</source>
          , Association for Computing Machinery, New York, NY, USA,
          <year>2017</year>
          , p.
          <fpage>278</fpage>
          -
          <lpage>288</lpage>
          . URL: https://doi.org/10.1145/3025453.3025739. doi:10.1145/3025453.3025739.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Myers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stylos</surname>
          </string-name>
          , <article-title>Improving API usability</article-title>,
          <source>Commun. ACM</source>
          <volume>59</volume>
          (
          <year>2016</year>
          )
          <fpage>62</fpage>
          -
          <lpage>69</lpage>
          . URL: https://doi.org/10.1145/2896587. doi:10.1145/2896587.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cambre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <article-title>One voice fits all? social implications and research challenges of designing voices for smart devices</article-title>
          ,
          <source>Proc. ACM Hum.-Comput. Interact</source>
          .
          <volume>3</volume>
          (
          <year>2019</year>
          ). URL: https://doi.org/10.1145/3359325. doi:10.1145/3359325.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Pea</surname>
          </string-name>
          ,
          <article-title>User centered system design: new perspectives on human-computer interaction</article-title>
          ,
          <source>Journal of Educational Computing Research</source> <volume>3</volume>
          (
          <year>1987</year>
          )
          <fpage>129</fpage>
          -
          <lpage>134</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>