<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Turin, Italy</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Conversational AI for Web Inclusivity: Technologies, Design Patterns and Development Toolkits</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ludovica Piro</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Politecnico di Milano</institution>
          ,
          <addr-line>DEIB, Piazza Leonardo da Vinci, 32, Milano, 20133</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Digital services can represent an important channel for granting access to knowledge, education, and work, especially for citizens living with disabilities. As of now, however, the Web is conceived essentially for visual use and is inadequate for users living with permanent or situational impairments. Conversational AI is emerging as a technology apt for the development of inclusive and accessible applications, but there is still a lack of guidance specific to the design of inclusive Conversational AI systems. This research proposes to identify guidelines, interaction patterns, and enabling technology for a new paradigm for accessible conversational web browsing.</p>
      </abstract>
      <kwd-group>
        <kwd>Software accessibility</kwd>
        <kwd>card toolkits</kwd>
        <kwd>conversational interfaces</kwd>
        <kwd>HCI design and evaluation methods</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Digital inclusion is a primary right for all citizens. Digital services, in fact, can represent
an important channel for granting access to knowledge, education, and work. The development
of accessible digital services is thus essential to guarantee inclusion and the right to access
information for every member of society, most notably for citizens with disabilities.
As of today, however, the Web is conceived mainly for visual use and is inadequate for those living
with permanent or situational impairments. Despite providing a degree of support, assistive
technologies such as screen readers cannot always grant proper assistance because of websites'
still limited compliance with accessibility guidelines.</p>
      <p>
        Conversational AI (CAI) is emerging as a technology for inclusive interaction with digital
services, as it can provide an interaction paradigm that is independent of the visual channel.
Still, there is a lack of widely accepted guidelines, methodologies, and development platforms to
guide developers and designers in delivering effective conversational interfaces that are also
fully accessible to users with disabilities. Recent work explores ways to leverage CAI to augment the
web. ConWeb[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is a platform offering a browser extension that enables users to browse the
web through voice interaction. The ConWeb browsing paradigm was developed with Blind and
Visually Impaired (BVI) users to define interaction patterns specific to their needs.
      </p>
      <p>In this context, my research will investigate how CAI can improve the accessibility of digital
services. It will address the following research question: what guidelines, interaction patterns,
and enabling technologies are needed for a new paradigm for conversational web browsing? The resulting
methodological and technological framework will support the design of websites that can be
accessed through Natural Language interaction.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Conversational user interfaces (CUI) are a broad class of interfaces that includes chatbots
and voice user interfaces (VUI). The interaction generally takes the form of a two-way dialogue between
humans and machines. Alexa, Siri, Microsoft Cortana, and Google Assistant are examples of
voice-enabled intelligent assistants with which users can interact to perform daily tasks.</p>
      <p>
        In the context of web navigation, non-visual navigation methods have already been explored
in the HCI literature through different approaches, such as web augmentation or end-user
programming[
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ]. Other approaches to make the Web more accessible to BVI users include
segmentation: Borodin et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] introduce a multi-modal browser that leverages segmentation
techniques to propose to users only the relevant parts of a webpage. Cambre et al., instead,
proposed Firefox Voice[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], a browser assistant that enabled users to perform tasks in the browser
through voice interaction. Through voice commands, it could perform tasks both at the level
of the webpage and at that of the browser application. Ripa et al.[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] present an end-user development
environment approach that leverages semantic annotation to divide the page into relevant
content blocks and generate voice assistants. Others propose the idea of a Conversational
Web. ConWeb[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is a browser extension that acts as a middleware between the Web client and
Web servers. The page is parsed and used to generate the dialogue necessary to answer users’
requests. It uses an NLP pipeline and a headless browser to extract user intents and entities and
transform the requests into actions on the webpage, respectively. However, this solution still
has some limitations, as the actions are limited to reading text and link navigation. Indeed,
these methods, while providing new technical solutions for non-visual navigation, do not
provide practitioners with generalised development support for a Web that is conversational by design.
      </p>
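As an illustration of the browsing paradigm described above, the following minimal sketch shows how a conversational middleware might map spoken requests onto a parsed page model. This is a toy example, not ConWeb's actual implementation; all names are hypothetical, and the two supported actions (reading text and following links) mirror the limitations noted in this section.

```python
# Toy sketch of a ConWeb-style pipeline: a parsed page model, a minimal
# NLU step extracting an intent and entity from an utterance, and a
# mapping from intents to page actions. All names are hypothetical.

PAGE_MODEL = {
    "links": {"services": "https://example.org/services"},
    "text": {"News": "Latest municipal announcements."},
}

def extract_intent(utterance):
    """Toy NLU step: classify the request into an (intent, entity) pair."""
    u = utterance.lower()
    if u.startswith("read"):
        return ("read_text", u.replace("read", "").strip())
    if u.startswith("open") or u.startswith("go to"):
        entity = u.replace("open", "").replace("go to", "").strip()
        return ("follow_link", entity)
    return ("unknown", None)

def act(intent, entity, page=PAGE_MODEL):
    """Map an extracted intent to an action on the parsed page model."""
    if intent == "read_text":
        for heading, body in page["text"].items():
            if entity in heading.lower():
                return body
        return "Section not found."
    if intent == "follow_link":
        url = page["links"].get(entity)
        return f"Navigating to {url}" if url else "Link not found."
    return "Sorry, I can only read sections or follow links."
```

For example, `act(*extract_intent("read news"))` answers with the text of the matching section, while any request outside the two supported actions falls through to an apology, which mirrors the limitation discussed above.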
      <p>
In HCI research, guidelines help address critical factors in the development of new design
solutions, such as accessibility or usability. With the increasing popularity of conversational
interfaces, the need has also emerged for specific guidelines to support their design.
Nielsen's heuristics[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] have been applied to CUI[
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. Still, both industry and academic HCI
researchers are seeking ways to codify design knowledge regarding CUI in the form of heuristics,
best practices, and similar resources. For example, Murad et al.[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], starting from an analysis of Nielsen,
Norman[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Shneiderman[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] guidelines, add a set of principles specific to VUIs to
ensure transparency and account for context. Microsoft[
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], IBM[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], and Google have
also published guidelines for the design of voice assistants. However, concerns have been raised
regarding their validity for BVI users[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        BVI users have been found to perceive commercial voice assistants as verbose and not very
helpful. To address these issues, Branham et al. [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] discuss design implications to make
voice assistants more accessible. They advocate for more personalization, both in the sense
of customizable voice commands and of customizable interaction preferences, for example to
modify the length of conversational turns or the speech speed. Corbett et al. [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], instead, provide
a case study highlighting navigation challenges and define two design principles to increase
discoverability and learnability through contextualised help and training. Lastly, Pucci et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]
propose a set of interaction patterns specifically researched for BVI users when interacting
with Web conversational agents. The proposed paradigm presents the webpage to users in
a tree-like model that goes beyond the sequential reading provided by screen readers, also
offering summarization and skimming patterns.
      </p>
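The tree-like presentation model described by Pucci et al. can be illustrated with a minimal sketch (a toy model under assumed names, not their implementation), contrasting a screen reader's sequential pass with a skimming pattern that announces only the children of the current node:

```python
# Toy illustration of a webpage as a tree of sections. sequential_read
# mimics a screen reader's document-order pass; skim announces only the
# immediate children of the current node, as in a skimming pattern.
from dataclasses import dataclass, field

@dataclass
class Node:
    title: str
    text: str = ""
    children: list = field(default_factory=list)

page = Node("Home", children=[
    Node("News", "Latest announcements."),
    Node("Services", children=[Node("Permits", "How to apply.")]),
])

def sequential_read(node):
    """Screen-reader style: visit every node in document order."""
    out = [node.title]
    for child in node.children:
        out += sequential_read(child)
    return out

def skim(node):
    """Skimming pattern: announce only the immediate children."""
    return [child.title for child in node.children]
```

Skimming the root announces only the top-level sections, letting the user descend into a branch of interest rather than sit through the full sequential reading.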
      <p>
Despite all these efforts, guidelines for CUI that consider accessibility principles are still sparse
and may cover only some interaction aspects. Furthermore, practitioners hold differing
views regarding the need for generally shared guidelines for CUI, as the field is still
evolving and subject to change[
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. Thus, consensus and general adoption of such
guidelines are yet to be achieved.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Contribution and Future Work</title>
      <p>In the first months of my Ph.D., I started analyzing card toolkits and guidelines available in the
academic context and the industry. Card-based design tools indeed support the translation of
theoretical research findings into design practices. They can be used to communicate methods,
frameworks, or theories. The analysis aimed at surveying the toolkits for CUI design to
understand the reference principles currently followed by designers and developers.
Through this analysis, I also aimed to understand which toolkits currently guide the design
of inclusive conversational AI, and how. To retrieve the toolkits relevant to inclusive design
and CAI, a Google search and a query on Google Scholar were performed with the keywords:
“inclusive cards”, “conversational AI cards”, and “responsible AI cards”.</p>
      <p>Through the search queries, ten decks of cards were identified. Cards not containing
guidelines relevant to CAI were excluded, leaving seven toolkits. The retrieved cards were mapped
across their domain, content, and ideal moment of use, as shown in Table 1. “Domain” indicates
the field they were designed for, whether Inclusivity or AI. “Content” indicates the type
of information codified in the cards. Examples are problem statements to elicit ideas, methods
to suggest possible approaches, technologies to be used, or insights providing inspiration on a
particular domain or problem. Lastly, the “ideal moment of use” indicates in which phase of the
design process it is best to employ the toolkit. The phases considered were: research, ideation,
prototyping, development, and evaluation.</p>
      <p>
        From the analysed sample, only three toolkits propose insights specific to AI: the AI Ideation
Cards[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], Responsible Bots[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], and the Human-AI Interaction guidelines[26]. The AI Ideation
Cards[
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] provide scenarios and possible methods useful when designing digital products that
leverage AI. They cover a broad range of tasks that can be supported by AI, and also, more
specifically, by CAI, but they do not prioritize inclusivity in their definitions. This deck provides
different application scenarios for CAI and lists some relevant issues to consider when designing
an application based on CAI, such as privacy and inclusivity. However, since it supports only
the ideation phase, it lacks concrete guidelines for the development of CAI applications.
      </p>
      <p>
        Responsible Bots[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], proposed by Microsoft, instead, addresses the prototyping and
development phases. It presents a set of guidelines specific to the development of conversational
agents that are transparent and reliable. It also addresses inclusive design practices by referring
to the Inclusive Activities Cards[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], also by Microsoft.
      </p>
      <p>Lastly, the Human-AI Interaction guidelines[26] provide general principles for responsible
AI-based interactive systems. They were defined through a four-step process that included
heuristic evaluations for validation, and they cover all moments of interaction between humans
and machines. Since they are conceived as general, the guidelines, while also valid for CAI
applications, do not provide principles specific to conversational interaction.</p>
      <p>
        The toolkits related to inclusivity[
        <xref ref-type="bibr" rid="ref20 ref21 ref22 ref23">22, 21, 20, 23</xref>
        ], instead, provide insights mostly useful for
the ideation stage. The provided design knowledge is meant to be a starting point to reflect
on problems faced by users with disabilities. Only the Microsoft Inclusivity cards[
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] also cover
other design phases, spanning research, ideation, and prototyping. This toolkit provides
principles for usability, error prevention, feedback on system status, and more.
      </p>
      <p>From the toolkits’ analysis, it emerged that a methodological framework for the development
of inclusive Conversational AI applications is still lacking. In the AI domain, only the Responsible
Bots toolkit addresses the topic of inclusivity, and only by referring to an external set of guidelines.
In the inclusive design domain, instead, with the exception of the Microsoft Inclusivity cards, the
toolkits focus more on providing inspirational content for starting the ideation phase than on
practical design guidelines.</p>
      <p>
Secondly, if we specifically consider guidelines for CAI, the analysis highlights
that the available guidelines focus on how to present the conversational agent to users: for
example, the tone of voice, how to be transparent about what the bot can do, or how to
recover from a failed interaction. However, how to structure complex interactions
with web pages is still not fully addressed in the literature. Thus, starting from the patterns
identified by Pucci et al. [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], my research will aim to expand the current methodological
framework for CAI into a new methodological and technological framework driving the
development of CAI systems that are inclusive by design and support access to web content
through voice interaction.
      </p>
      <p>To achieve these objectives, my research will also focus on expanding user-centred methods,
such as the User Centred Design (UCD) sprint [27], to include in the ideation and testing phases
principles specific to inclusive design and CAI. The UCD sprint offers a methodology to develop
new technology with a human-centred approach. The methodology is structured in three phases,
Discovery, Design, and Reality Check, that are meant to guide participants in understanding
users and their needs. Currently, it does not include considerations specific to the design of AI
technologies, nor to technologies catered to marginalised groups.</p>
      <p>The methodological framework will be tested and validated in partnership with the
Municipality of Milan, to assess the validity of the CAI design guidelines and toolkits discussed in
this work and to identify extensions along the directions outlined above. Using the
Municipality’s web resources as a case study, I will investigate the limits of current AI and inclusivity
guidelines, and how to expand them to support the development of accessible conversational
services. The aim will be to define guidelines for a methodological framework guiding the
design of CAI solutions addressing inclusivity.</p>
      <p>Lastly, this research will also explore technical solutions, from End-User
Development paradigms and platforms to automatic code generation, to support
developers in implementing websites that “by design” can also be equipped with a conversational
channel. Specifically, given the focus of my research on inclusive Web resources for the Public
Administration, a possible solution to be evaluated is the expansion of the Web
development kit proposed by the Designers Italia initiative[28]. The kit provides developers with
design guidelines and ready-to-use Web components to develop consistent Web applications
for the Public Administration. Future work will explore the feasibility of expanding this kit
through semantic tagging and related Web browser extensions for generating a dialogue system
for conversational Web browsing.
</p>
      <p>[26] S. Amershi, D. Weld, M. Vorvoreanu, A. Fourney, B. Nushi, P. Collisson, J. Suh, S. Iqbal, P. N. Bennett, K. Inkpen, J. Teevan, R. Kikin-Gil, E. Horvitz, Guidelines for human-AI interaction, CHI ’19, ACM, New York, NY, USA, 2019, pp. 1–13. doi:10.1145/3290605.3300233.
[27] M. Larusdottir, V. Roto, Å. Cajander, Introduction to user-centred design sprint, in: C. Ardito, R. Lanzilotti, A. Malizia, H. Petrie, A. Piccinno, G. Desolda, K. Inkpen (Eds.), Human-Computer Interaction – INTERACT 2021, Springer, Cham, 2021, pp. 253–256.
[28] Designers Italia, Designers Italia, 2023. URL: https://designers.italia.it/.</p>
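The semantic-tagging idea outlined above could, for instance, rely on data-* attributes added to the kit's Web components, from which a browser extension derives dialogue intents. The sketch below is purely hypothetical: the attribute names and markup are assumptions for illustration, not part of the Designers Italia kit.

```python
# Hypothetical sketch: components annotated with data-intent /
# data-utterance attributes, and a parser that collects them into a
# list of dialogue intents for a conversational browsing extension.
from html.parser import HTMLParser

class IntentExtractor(HTMLParser):
    """Collect conversational intents declared via data-intent attributes."""
    def __init__(self):
        super().__init__()
        self.intents = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-intent" in attrs:
            self.intents.append({
                "intent": attrs["data-intent"],
                "utterance_hint": attrs.get("data-utterance", ""),
                "tag": tag,
            })

markup = """
<nav data-intent="list_services" data-utterance="what services are there">
  <a href="/permits" data-intent="open_permits" data-utterance="open permits">Permits</a>
</nav>
"""

extractor = IntentExtractor()
extractor.feed(markup)
```

A dialogue system could then ground each collected intent in the annotated component, so that pages built with tagged components are conversational "by design" without extra development effort.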
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Baez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Daniel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Casati</surname>
          </string-name>
          ,
          <article-title>Conversational web interaction: Proposal of a dialog-based natural language interaction paradigm for the web</article-title>
          , in: Chatbot Research and Design: Third International Workshop, CONVERSATIONS 2019, Amsterdam, The Netherlands,
          <source>November 19-20</source>
          ,
          <year>2019</year>
          , Revised Selected Papers, Springer-Verlag, Berlin, Heidelberg,
          <year>2019</year>
          , p.
          <fpage>94</fpage>
          -
          <lpage>110</lpage>
          . doi:10.1007/978-3-030-39540-7_7.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Bigham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nichols</surname>
          </string-name>
          ,
          <article-title>Trailblazer: Enabling blind users to blaze trails through the web</article-title>
          ,
          <source>in: Proceedings of the 14th International Conference on Intelligent User Interfaces</source>
          ,
          <source>IUI '09</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2009</year>
          , p.
          <fpage>177</fpage>
          -
          <lpage>186</lpage>
          . doi:10.1145/1502650.1502677.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Ramakrishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ashok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Billah</surname>
          </string-name>
          ,
          <article-title>Non-visual web browsing: Beyond web accessibility</article-title>
          , volume
          <volume>10278</volume>
          ,
          <year>2017</year>
          , pp.
          <fpage>322</fpage>
          -
          <lpage>334</lpage>
          . doi:10.1007/978-3-319-58703-5_24.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Ripa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Torre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Firmenich</surname>
          </string-name>
          , G. Rossi,
          <source>End-User Development of Voice User Interfaces Based on Web Content</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>50</lpage>
          . doi:10.1007/978-3-030-24781-2_3.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Borodin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Puzis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Melnyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. V.</given-names>
            <surname>Ramakrishnan</surname>
          </string-name>
          , G. Dausch,
          <article-title>Hearsay: A new generation context-driven multi-modal assistive web browser</article-title>
          ,
          <source>in: Proceedings of the 19th International Conference on World Wide Web, WWW '10</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2010</year>
          , p.
          <fpage>1233</fpage>
          -
          <lpage>1236</lpage>
          . doi:10.1145/1772690.1772890.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cambre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Razi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Bicking</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wallin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Tsai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kulkarni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kaye</surname>
          </string-name>
          ,
          <article-title>Firefox voice: An open and extensible voice assistant built upon the web</article-title>
          ,
          <source>CHI '21</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2021</year>
          . doi:10.1145/3411764.3445409.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Nielsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Molich</surname>
          </string-name>
          ,
          <article-title>Heuristic evaluation of user interfaces</article-title>
          ,
          <source>in: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '90</source>
          ,
          ACM
          , New York, NY, USA,
          <year>1990</year>
          , p.
          <fpage>249</fpage>
          -
          <lpage>256</lpage>
          . doi:10.1145/97243.97281.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M.</given-names>
            <surname>Maguire</surname>
          </string-name>
          ,
          <article-title>Development of a heuristic evaluation tool for voice user interfaces</article-title>
          ,
          in: A. Marcus, W. Wang (Eds.),
          <source>Design, User Experience, and Usability. Practice and Case Studies</source>
          , Springer International Publishing, Cham,
          <year>2019</year>
          , pp.
          <fpage>212</fpage>
          -
          <lpage>225</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Murad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Munteanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Cowan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <article-title>Revolution or evolution? speech interaction and hci design guidelines</article-title>
          ,
          <source>IEEE Pervasive Computing</source>
          <volume>18</volume>
          (
          <year>2019</year>
          )
          <fpage>33</fpage>
          -
          <lpage>45</lpage>
          . doi:10.1109/MPRV.2019.2906991.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Murad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Munteanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Clark</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Cowan</surname>
          </string-name>
          ,
          <article-title>Design guidelines for hands-free speech interaction</article-title>
          ,
          <source>MobileHCI '18</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2018</year>
          , p.
          <fpage>269</fpage>
          -
          <lpage>276</lpage>
          . doi:10.1145/3236112.3236149.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Norman</surname>
          </string-name>
          , The Design of Everyday Things, MIT Press,
          <year>2013</year>
          . URL: https://books.google.it/books?id=heCtnQEACAAJ.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>B.</given-names>
            <surname>Shneiderman</surname>
          </string-name>
          ,
          <article-title>Designing the User Interface: Strategies for Effective Human-Computer Interaction</article-title>
          , 3rd ed.,
          Addison-Wesley Longman
          Publishing Co., Inc., USA,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S.</given-names>
            <surname>Amershi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weld</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vorvoreanu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Fourney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nushi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Collisson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Suh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Iqbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Inkpen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Teevan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kikin-Gil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Horvitz</surname>
          </string-name>
          ,
          <article-title>Guidelines for human-AI interaction</article-title>
          ,
          <source>CHI '19</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2019</year>
          , p.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . doi:10.1145/3290605.3300233.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>An</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-J.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <article-title>The IBM natural conversation framework: a new paradigm for conversational UX design</article-title>
          ,
          <source>Human-Computer Interaction</source>
          <volume>38</volume>
          (
          <year>2023</year>
          )
          <fpage>168</fpage>
          -
          <lpage>193</lpage>
          . doi:10.1080/07370024.2022.2081571.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pradhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Mehta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Findlater</surname>
          </string-name>
          ,
          <article-title>"Accessibility came by accident": Use of voice-controlled intelligent personal assistants by people with disabilities</article-title>
          ,
          <source>CHI '18</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2018</year>
          , p.
          <fpage>1</fpage>
          -
          <lpage>13</lpage>
          . doi:10.1145/3173574.3174033.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Branham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Mukkath Roy</surname>
          </string-name>
          ,
          <article-title>Reading between the guidelines: How commercial voice assistant guidelines hinder accessibility for blind users</article-title>
          ,
          <source>in: Proceedings of the 21st International ACM SIGACCESS Conference on Computers and Accessibility</source>
          ,
          <source>ASSETS '19</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2019</year>
          , p.
          <fpage>446</fpage>
          -
          <lpage>458</lpage>
          . doi:10.1145/3308561.3353797.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>E.</given-names>
            <surname>Corbett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Weber</surname>
          </string-name>
          ,
          <article-title>What can I say? Addressing user experience challenges of a mobile voice user interface for accessibility</article-title>
          ,
          <source>MobileHCI '16</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2016</year>
          , p.
          <fpage>72</fpage>
          -
          <lpage>82</lpage>
          . doi:10.1145/2935334.2935386.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>E.</given-names>
            <surname>Pucci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Possaghi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Cutrupi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Baez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cappiello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Matera</surname>
          </string-name>
          ,
          <article-title>Defining patterns for a conversational web</article-title>
          ,
          <source>CHI '23</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2023</year>
          . doi:10.1145/3544548.3581145.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K. H.</given-names>
            <surname>Khemani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reeves</surname>
          </string-name>
          ,
          <article-title>Unpacking practitioners' attitudes towards codifications of design knowledge for voice user interfaces</article-title>
          ,
          <source>in: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI '22</source>
          ,
          ACM
          , New York, NY, USA,
          <year>2022</year>
          . doi:
          <volume>10</volume>
          .1145/3491102.3517623.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          Frog
          , Cards for humanity,
          <year>2023</year>
          . URL: https://cardsforhumanity.frog.co/.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          Government of Ontario
          , Inclusive design cards,
          <year>2023</year>
          . URL: http://www.ontario.ca/page/inclusive-design-cards.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          Google
          , Inclusive design works,
          <year>2023</year>
          . URL: https://inclusivedesignworks.app/.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          Microsoft
          , Microsoft inclusive design,
          <year>2023</year>
          . URL: https://inclusive.microsoft.design/.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          AIxDesign
          , AIxDesign,
          <year>2023</year>
          . URL: https://aixdesign.co/shop.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>P.</given-names>
            <surname>Bailey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumaran</surname>
          </string-name>
          ,
          <article-title>Responsible bots: 10 guidelines for developers of conversational AI</article-title>
          ,
          <source>ACM Transactions on the Web</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>