<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>language processing systems from socially aware codesign</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Soraia S. Prietch</string-name>
          <email>soraia@ufr.edu.br</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
          <xref ref-type="aff" rid="aff4">4</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>J. Alfredo Sánchez</string-name>
          <email>alfredo.sanchez@lania.edu.mx</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Josefina Guerrero García</string-name>
          <email>josefina.guerrero@correo.buap.mx</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>72592 Puebla</institution>
          ,
          <addr-line>Pue.</addr-line>
          ,
          <country country="MX">México</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Benemérita Universidad Autónoma de Puebla (BUAP), Avenida San Claudio</institution>
          ,
          <addr-line>Blvrd 14 Sur, Cdad. Universitaria</addr-line>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Laboratorio Nacional de Informática Avanzada (LANIA)</institution>
          ,
          <addr-line>Rébsamen 80, Xalapa, Ver, 91090</addr-line>
          ,
          <country country="MX">Mexico</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Sign language</institution>
          ,
          <addr-line>Codesign, Good practices, Socially Aware Design, Automatic processing</addr-line>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Universidade Federal de Rondonópolis</institution>
          ,
          <addr-line>Av. dos Estudantes n. 1576, Rondonópolis, MT, 78735900</addr-line>
          ,
          <country country="BR">Brazil</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>1</volume>
      <issue>477</issue>
      <fpage>13</fpage>
      <lpage>15</lpage>
      <abstract>
<p>In this paper, we report on the discovery process that resulted from a series of semio-participatory workshops with Deaf community codesigners. This process was part of the user research conducted to substantiate the design of solutions to support autonomy in communication and information access for deaf persons who are sign language users. In our user research, we carried out four semio-participatory workshops, as defined by the socially aware design approach. We invited members of a Sign Language community into a democratic design process, providing interested parties with opportunities to reflect on experiences and preferences in the context of automatic sign language processing (ASLP) systems. Our main contribution is the formulation of 63 socio-technical good practices for the design of ASLP systems, organized at the social, pragmatic, semantic and syntactic levels for both their human and technological aspects. Two other contributions resulted from our literature review and workshop planning: Firstly, we formalized the steps necessary to engage with a deaf community in the codesign process. Secondly, we present an analysis of our research through the lenses of five calls to action: including Sign Language community members as codesigners, discussing real-world applications, broadening the concept of user interface guidelines to socio-technical good practices, identifying the challenges of finding representative datasets, and discussing issues involved in standardized annotated videos in sign language. We thus present both empirical and methodological contributions to the field of Human-Computer Interaction.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In this paper we present procedures and outcomes from conducting four workshops, following the
Socially Aware Design approach [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. By using this approach, we aimed to build automatic sign
language processing (ASLP) systems that are soundly based on user studies. As proposed by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], ASLP
includes three research categories: recognition, generation and translation. In brief, automatic
sign language recognition (ASLR) refers to systems that capture static or dynamic images and
movements of sign language communication as input and deliver speech or text in a written/oral
language as output. Automatic sign language generation (ASLG) refers to systems that capture
speech or text of a written/oral language as input and deliver an animated avatar communicating in sign
language as output. Automatic sign language translation (ASLT) may refer to systems that perform
one- or two-way translation between a sign language and a written/oral language, possibly using ASLR
or ASLG as part of the process.
      </p>
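<p>The input/output taxonomy above can be summarized in a small data-model sketch. This is an illustrative sketch only; the type and field names below are our own, not part of the cited taxonomy:</p>

```python
from dataclasses import dataclass
from enum import Enum

class Modality(Enum):
    """The two communication modalities exchanged by ASLP systems."""
    SIGNED = "signed video (static/dynamic images and movements)"
    WRITTEN_ORAL = "speech or text in a written/oral language"

@dataclass(frozen=True)
class ASLPSystem:
    """An ASLP research category described by its input/output modalities."""
    name: str
    inputs: frozenset
    outputs: frozenset

# Recognition (ASLR): signed input -> written/oral output.
ASLR = ASLPSystem("recognition", frozenset({Modality.SIGNED}), frozenset({Modality.WRITTEN_ORAL}))
# Generation (ASLG): written/oral input -> signed (avatar) output.
ASLG = ASLPSystem("generation", frozenset({Modality.WRITTEN_ORAL}), frozenset({Modality.SIGNED}))
# Translation (ASLT): one- or two-way, possibly composing ASLR/ASLG internally.
ASLT = ASLPSystem("translation",
                  frozenset({Modality.SIGNED, Modality.WRITTEN_ORAL}),
                  frozenset({Modality.SIGNED, Modality.WRITTEN_ORAL}))
```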
      <p>
        Our recent work [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] has revealed a gap: only a few research projects that aim to build
ASLP systems are soundly based on user studies. We posit that ASLP system design should engage all
interested parties as part of the research team, in a collaborative perspective towards a universal design,
including Sign Language Community members and hearing persons with diverse backgrounds. In this
paper, we use the term Deaf Community or Sign Language Community to refer to a subpopulation
among the diverse and larger group of people who are D/deaf [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and who are sign language users.
      </p>
      <p>
        With this human-centered approach as our research core, we have been conducting work on the
design of technology for, with and by a Deaf Community that builds upon the notions of Socially Aware
Design. Baranauskas [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] structured Socially Aware Design over the informal, formal and technical
levels of Hall’s culture theory [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. From this viewpoint, the design of a technical system takes into
consideration the lenses of the informal and formal levels of a given social group. By doing so,
researchers take into account the point of view from different stakeholders, paying attention to aspects
such as culture, values, behavior patterns and preferences from the informal perspective, and laws,
regulations, rules and policies from the formal perspective. These three perspectives situate the design
of interactive systems in a socioeconomic and cultural reality, which includes a diverse set of interested
parties as codesigners, leading to the construction of products based upon collaborative meanings. This
situated design process is organized into semio-participatory workshops, in which a set of artifacts
(informal, formal and technical) are used in inclusive participatory practices to mediate communication
and to register the entire codesign process.
      </p>
      <p>
        Our work with a Deaf Community has progressed through several stages, focusing mainly on the
design of ASLP systems. As a starting point for the codesign of such interactive systems, we conducted
a systematic literature review of user studies for the design of ASLP systems [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In that work, we
analyzed four major aspects of primary studies: goals and research methods, user involvement and
design life cycle, cultural and collaborative aspects, and lessons learned from empirical works, focusing
on the human and the context components of a product design. Our notion of ASLP systems was
inspired by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which encompasses any automatic system to generate and to recognize sign language,
as well as to translate to and from sign language.
      </p>
      <p>
        In this paper, we share our understanding of socio-technical aspects that are involved in the design
of ASLP systems with a Deaf community as codesigners. In order to uncover evidence on
socio-technical aspects, we relied on the guidance of the socially aware design approach by conducting four
semio-participatory workshops with Deaf community codesigners. This discovery process required
paying attention to details of what the codesign sessions with deaf signers revealed to us. Our main
contribution is the formulation of socio-technical good practices for the design of ASLP systems.
Additionally, we present two other contributions that resulted from conducting our literature review
and planning each workshop, as we realized they could help other researchers who carry out similar
work (e.g., with deaf communities or in the design and development of ASLP systems). The first
contribution is an analysis of various aspects of our research through the lenses of the five calls to action
proposed by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], a research group that has been using an interdisciplinary approach to the design of
ASLP systems. The second contribution is a formalization of the steps we followed to engage with a
deaf community in the codesign process. We thus present both empirical and methodological
contributions [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] to the field of Human-Computer Interaction (HCI).
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. The Socially Aware Design Approach</title>
      <p>
        Socially Aware Design is a human-computer interaction approach, proposed by Baranauskas [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ],
grounded in the theories and concepts of culture studies, participatory design [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], organizational
semiotics [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and principles of the design for all [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In this section, we provide background on the
approach and its concrete artifacts as well as pointers to salient related work that has explored its use.
      </p>
      <p>
        The semio-participatory framework, a representation that is helpful in explaining Socially Aware
Design, considers society, or a sample of it, as a wrapper for three levels of culture study, as layers of
a semiotic onion [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], converging from the informal and formal levels to the technical level. This means
that in order to design a product at the technical level, we need to consider culture, beliefs, and everyday
life (from the informal level) as well as learned procedures (from the formal level) of the interested
parties. This implies that, beyond human-centered design, we should consider context-centered or
society-centered design. Taking these principles into account, semio-participatory workshops, using inclusive
participatory practices, are conducted to make sense of communication between interested parties
among those levels.
      </p>
      <p>
        Inclusive participatory practices involve participatory sessions with interested parties, providing
communication support, a physically accessible environment, and easy-to-use artifacts. Three concrete
artifacts were used during the semio-participatory workshops in this research: the Stakeholders
Identification Diagram, the Evaluation Frame and the Semiotic Framework [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]. In addition to the
Socially Aware Design recommended artifacts, we designed a Rating Scenarios artifact, and used it in
Semio-Participatory Workshop 2. This artifact presents ideas for scenarios inspired by related work
[[18], [19], [20], [21], [22]].
      </p>
      <p>The Stakeholders Identification Diagram artifact, used in Semio-Participatory Workshop 1, is a
graphical representation consisting of five concentric circles: Starting from the center, a circle
represents Operation (intended solution), followed by Contribution (main actors and responsible
parties), Source (clients and suppliers), Market (partners and competitors) and Community (bystanders
and legislators). This means that stakeholders who are closer to the Operation are those who can
collaborate the most with the project. This artifact helps participants in identifying interested parties,
from the four above-mentioned categories, whom they believe would be key participants in the codesign
process.</p>
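<p>The diagram's structure can be sketched as an ordered list of circles, from the center outward. This is a hypothetical sketch; the category names and descriptions follow the text above, while the function name is our own:</p>

```python
# Concentric circles of the Stakeholders Identification Diagram, ordered from
# the center (Operation) outward; names and descriptions follow the text.
CIRCLES = [
    ("Operation", "intended solution"),
    ("Contribution", "main actors and responsible parties"),
    ("Source", "clients and suppliers"),
    ("Market", "partners and competitors"),
    ("Community", "bystanders and legislators"),
]

def distance_from_operation(category):
    """Smaller distance = closer to the center = more potential to collaborate."""
    return [name for name, _ in CIRCLES].index(category)

# Stakeholders in Contribution can collaborate more than those in Community.
assert distance_from_operation("Contribution") < distance_from_operation("Community")
```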
      <p>The Evaluation Frame artifact, used in Semio-Participatory Workshops 3 and 4, supports
brainstorming sessions, where codesigners socialize questions and problems, as well as ideas and
solutions for the technology’s design, taking into account each actor in the Stakeholders Identification
Diagram artifact. For this research, we proposed an adaptation of the Evaluation Frame artifact, which
presents each identified stakeholder as well as a standard representation for questions and problems,
and ideas and solutions. Each element includes short and simplified texts side by side with a
representative image in a separate sheet shown one at a time for reducing memory overload.</p>
      <p>
        The Semiotic Framework (or Semiotic Ladder) [[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [17]]–used after conducting the four
Semio-Participatory Workshops–is an artifact used to organize and make sense of the ideas collected. The
Semiotic Ladder artifact supports these activities by organizing socio-technical good practices–
considering a society-centered design–into six levels: social, pragmatic, semantic, syntactic, empirical,
and physical levels, from top to bottom in the ladder. The social level refers to the effects of system
use, such as expectations and culture. The pragmatic level refers to the system’s utility, such as
intentionality of a semiotic sign and communication. The semantic level refers to the meanings of the
interface elements, such as representative labels and icons. The syntactic level refers to the system’s
structure, such as the navigational model and standards. The empirical level refers to communication
channels using the infrastructure, such as databases and internet connection. The physical level refers
to the system’s infrastructure, such as memory, processing capacity and devices.
      </p>
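<p>As an illustration, organizing collected items by ladder level can be sketched as a simple grouping. This is a hypothetical sketch: the level names follow the text above, but the example items are invented placeholders, not the study's actual good practices:</p>

```python
# Hypothetical sketch: grouping collected workshop items by Semiotic Ladder
# level. Level names follow the text; the example items are invented.
from collections import defaultdict

# Top-to-bottom order of the ladder's six levels.
LEVELS = ["social", "pragmatic", "semantic", "syntactic", "empirical", "physical"]

def build_ladder(items):
    """Group (level, good_practice) pairs into a level -> practices mapping."""
    ladder = defaultdict(list)
    for level, practice in items:
        assert level in LEVELS, f"unknown level: {level}"
        ladder[level].append(practice)
    return ladder

items = [
    ("social", "respect Deaf culture and community expectations"),
    ("semantic", "use representative labels and icons"),
    ("empirical", "handle unstable internet connections gracefully"),
]
ladder = build_ladder(items)
for level in LEVELS:  # traverse in ladder order, top to bottom
    print(level, ladder[level])
```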
      <p>A significant number of related works that have applied concepts and artifacts of Socially Aware
Design and conducted semio-participatory workshops have been reported in the literature. Some have
studied meta-communication in inclusive scenarios [23] or proposed to extend the approach to include
cultural and value perspectives to the design of interactive systems [24], whereas others have presented
frameworks for assistive technology design [[25], [26], [27], [28]].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Research on automatic sign language processing (ASLP) systems in Mexico</title>
      <p>Since we conducted the field research in Mexico, we felt motivated to learn about characteristics of
local research on ASLP systems and types of user studies related to this topic. We were interested to
know whether there were research groups working on this topic, to understand research characteristics
and to confirm whether investigation has been conducted on user studies on this domain.</p>
      <p>In this exploratory study, using Google Scholar we found 43 works (14 in Spanish and 29 in English)
with publication years between 2002 and 2019. Nineteen works were published in 2016 and 2017, the
period of greatest interest in this topic in the country.</p>
      <p>The works we found were conducted in 25 institutions from eleven states and Mexico City (Ciudad
de México, CDMX). Some were carried out in collaboration with institutions from different states or
countries (such as the United States and Italy). Five states stand out with higher numbers of scientific
production on ASLP systems: CDMX (15), State of Mexico (6), Oaxaca (6), Puebla (6), and Veracruz
(6), whereas five institutions lead this production: Instituto Politécnico Nacional (IPN, 11), Universidad
Autónoma del Estado de México (UAEM, 8), Universidad Tecnológica de la Mixteca (UTM, 6),
Benemérita Universidad Autónoma de Puebla (BUAP, 4) and Universidad Veracruzana (UV, 4).</p>
      <p>Twenty-four papers come from eight research groups in six institutions, reporting advances of their
work. Six papers from UTM in Oaxaca, eight papers from three different groups of IPN in CDMX, five
papers from two groups of UAEM in México state (Teotihuacan Valley and Texcoco, respectively),
two papers from Universidad Tecnológica de Puebla (UTP) and BUAP, and three works from UV.
Based on the number of dedication years and number of publications, we can infer that studies on ASLP
systems in Mexico are stronger in these groups. Nineteen are isolated works from different research
groups with only one publication each.</p>
      <p>Among the 43 works we found, thirty-seven reported research conducted on Mexican Sign Language
(LSM, Lengua de Señas Mexicana), three on American Sign Language (ASL) and three did not specify
a sign language. Regarding the research focus within sign language processing studies, fifteen worked
merely on letters of the alphabet [[29], [30], [31], [32], [33], [34], [35], [36], [37], [38], [39], [40], [41],
[42]], six on letters and words [[18], [43], [44], [45], [46], [47]], four only on words [[48], [49], [50],
[51]], four on sentences [[52], [53], [54], [55]], three on numbers and letters of the alphabet [[56], [57],
[58]], two on numbers, letters and words [[59], [60]], two on words and sentences [[61], [62]], one on
hand configuration and numbers [45], one on phonetic units [63], one on vowels [64] and four did not
mention their focus [[65], [66], [67], [68]].</p>
      <p>
        Considering that research on automatic sign language processing systems encompasses various
topics studied by many researchers around the world, we identified twenty-six works on ASLR systems,
thirteen on ASLT systems, two on ASLG associated with ASLR systems, one on a dictionary and one
on a database challenge contest. Amongst the ASLT systems group of papers, there are many with
ASLR system characteristics. Hence, there is a tendency in research toward recognition of letters of the
alphabet, numbers, and isolated signs. This may “risk misrepresenting sign language recognition as a
gesture recognition problem, ignoring the complexity of sign languages as well as the broader social
context within which such systems must function” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Six works present further advances by
conducting studies of sentence processing and one on phonetic units, which are approaches that show
a broader understanding of the complexity of sign language communication.
      </p>
      <p>With respect to potential users’ involvement, in the related work (Section 3), thirty works reported
some type of participation and thirteen did not mention anything about it. From works that mentioned
user involvement, twenty-one noted that individuals participated in the process to compose the authors’
database, thirteen in tests of the proposed system, two in planning of the database, one in the needs,
activities and context identification phase, and one in the opinion collection about the prototype design.
Six works (20%) explicitly reported involvement of deaf persons as participants [[29], [33], [39], [52],
[54], [60]]. Only one work (3.33%) has an emphasis on the HCI field of research [33], where the author
conducted the four phases of the HCI life cycle [69], including potential users as participants. Moreover,
among these works related to user studies, none investigated context-centered or
society-centered design or proposed socio-technical good practices for the design of ASLP systems.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Research methodology outline</title>
      <p>
        In a previous study [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we conducted interviews with members of the Sign Language community
in order to understand demographic data and socioeconomic and cultural aspects, and to invite them
to participate in the planned Semio-participatory Workshops. By member of the Sign Language
community, we refer to D/deaf persons, teachers and family members of D/deaf persons and interpreters
who may be D/deaf or hearing persons, but in either case are sign language users. Out of 11 interviewed
participants, 7 accepted the invitation to continue collaborating and signed an informed consent;
however, only 5 of them actually participated as codesigners. It is worth noting that ASLP systems are
intended to provide communication between D/deaf signers and hearing non-signers. For that reason,
we consider it important to have representatives of both categories of stakeholders participating as
codesigners of such a solution. Our goal as a research team is to gather a diverse group of collaborators;
this, however, does not impose a minimum number for each type of stakeholder in the team.
      </p>
      <p>We conducted four Semio-participatory Workshops, which took place in a classroom at the local
Association of the Deaf, as one-hour sessions every two weeks. The researchers (R1 and R2) were
familiar with the Sign Language community since they were taking LSM lessons at the Association;
however, they were not yet proficient enough to fully communicate with participants in LSM.
Therefore, in all sessions we had the support of an LSM interpreter, who also participated as a codesigner.
This interpreter helped recruit other members of the Sign Language community. The informed consent
was one of the documents analyzed and approved, along with the research project, by the university ethics
committee for research conducted with human beings.</p>
      <p>In the inclusive participatory practices, we used four types of artifacts. For the workshops, artifacts
were printed, and participants could use colored markers, post-it notes, and stickers. Three of the
artifacts are those recommended by the Socially Aware Design approach–the Stakeholders Identification
Diagram, the Evaluation Frame, and the Semiotic Ladder–while the fourth, the Rating Scenarios, is
our own design. We filmed and transcribed the inclusive participatory practices.</p>
    </sec>
    <sec id="sec-5">
      <title>5. The Semio-Participatory Workshops</title>
      <p>In this section, we present participants and procedures, as well as outcomes from the four
semio-participatory workshops carried out biweekly for two months. Subsection 5.3 covers descriptions of
two workshops since we ran them back-to-back using the same artifact.</p>
    </sec>
    <sec id="sec-6">
      <title>5.1. Workshop 1: Stakeholders Identification Diagram</title>
      <p>Semio-participatory Workshop 1 was conducted with five codesigners, in which participants from
the Sign Language Community included two D/deaf persons, a mother of a D/deaf person, an LSM
Interpreter and a Researcher. Three codesigners were women and two men, with an average age of 38.6,
ranging from 19 to 52 years old.</p>
      <p>In our case of interest, the alternative solution is related to automatic sign language processing
(ASLP) systems for sign language recognition, generation and translation. As the main activity of the
Semio-participatory Workshop, we conducted the inclusive participatory practice using the original
Stakeholders Identification Diagram artifact translated into Spanish and including ASLP systems as the
Operation. The Operation (intended solution) is placed in the innermost, core circle of the
stakeholders’ layers in the artifact, which range from the most (closest) to the least (farthest from the
center) interested parties.</p>
      <p>The task of the codesigners–members of the Sign Language community and researchers as a team–
in this session was to identify the potential stakeholders: from the closest interested parties, in the
Contribution category referring to main actors and responsible parties, through the Source category
referring to clients and suppliers and the Market category referring to partners and competitors, to the
Community category referring to bystanders and legislators.</p>
      <p>As a deliverable of Semio-participatory Workshop 1, we produced a list of identified stakeholders
for each category who codesigners inferred could collaborate in the Socially Aware Design process of
ASLP systems. The four categories of the Stakeholders Identification Diagram artifact were elicited as
follows: (a) Contribution, with nine stakeholders; (b) Source, with five stakeholders; (c) Market, with
four stakeholders; and (d) Community, with two stakeholders.</p>
      <p>In total, participants identified twenty categories of representatives as relevant stakeholders to take
part as codesigners in the proposal of ASLP systems. Our participants belonged to four of those
participant categories and made up a sufficiently diverse mix for a first iteration of codesign.</p>
    </sec>
    <sec id="sec-7">
      <title>5.2. Workshop 2: Rating Scenarios</title>
      <p>Semio-participatory Workshop 2 was conducted two weeks after the first one. Of the seven
codesigners, four were from the Sign Language Community–two deaf teachers, a mother,
and a teacher of D/deaf persons–plus an LSM Interpreter and two Researchers. Four codesigners were
women and three men, with an average age of 40.85, ranging from 20 to 57 years old.</p>
      <p>In the first Semio-participatory workshop, we noticed a need to explore other possibilities of
scenarios in which ASLP systems could be embedded, since at that moment we did not intend to
codesign user interfaces. We designed fifteen scenarios (Table 1) in which ASLP systems could support
communication and information access, reminding codesigners of the Stakeholders Identification
artifact in each scenario.</p>
      <p>In Semio-participatory Workshop 2, the Rating Scenarios artifact consisted of a set of previously
designed and printed materials, presented in a random order and numbered at the top left corner. Also,
each scenario included a short text explanation next to an illustrative image at the center and, at the
bottom, a 5-point Likert scale with smiley faces (ranging from Dislike very much to Like very much),
which we called a like-scale. The Rating Scenarios artifact presents what has been proposed in the literature
for different types of ASLP systems, so that codesigners who were not aware of such systems could get
a visual idea of possible related systems.</p>
      <p>Displaying each scenario on the wall and explaining one at a time, we invited participants to indicate,
with a colored dot sticker, how much they liked the design alternative presented; this was the first task
everyone had to accomplish. As a second task, after discussing and using the like-scale for each
individual scenario, we asked participants to number the scenarios from 1 to 15 according to their preferences
- 1 being the one they preferred the most, and 15 the least - and to discuss their motivations. For this
second ranking, participants decided to form two groups, one with two D/deaf members and another
with two hearing members. The interpreter decided not to participate in this second ranking task and
researchers did not participate in any of the rating tasks.</p>
      <p>As a deliverable of Semio-participatory Workshop 2, we generated a spreadsheet, which compiles
scenario ratings from Task 1 (like-scale) and Task 2 (ordering). Among the 15 scenarios, the six selected
(#1, #2, #3, #6, #11 and #15) were taken for voting in Workshop 3 to identify the two
preferred solutions to design.</p>
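<p>The spreadsheet compilation can be sketched as a small aggregation over the two tasks. This is a hypothetical sketch: the ratings and rankings below are invented placeholders, not the workshop's actual data, and the tie-breaking rule is our own illustrative choice:</p>

```python
# Hypothetical sketch of the Workshop 2 spreadsheet aggregation: the numbers
# below are illustrative, not the study's actual ratings.
from statistics import mean

# like_scale: per-scenario list of 1-5 smiley-face ratings (Task 1)
like_scale = {1: [5, 5, 4], 2: [4, 5, 4], 6: [5, 4, 4], 15: [5, 5, 5], 7: [2, 3, 2]}
# rank: per-scenario position 1..15 given by each group (Task 2; lower is better)
rank = {1: [3, 4], 2: [2, 5], 6: [2, 3], 15: [1, 1], 7: [12, 14]}

def shortlist(like_scale, rank, n):
    """Order scenarios by mean like-scale (desc), break ties by mean rank (asc)."""
    score = lambda s: (-mean(like_scale[s]), mean(rank[s]))
    return sorted(like_scale, key=score)[:n]

top3 = shortlist(like_scale, rank, 3)  # == [15, 1, 6] with these illustrative numbers
```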
    </sec>
    <sec id="sec-8">
      <title>5.3. Workshops 3 and 4: Evaluation Frame</title>
      <p>Workshops 3 and 4 focused on the Evaluation Frame, in which, as prescribed by Socially Aware
Design, we brainstormed and ranked questions and problems, and ideas and solutions for each
stakeholder type.</p>
      <p>Semio-participatory Workshops 3 and 4 were conducted two and four weeks after Workshop 2,
respectively. Four codesigners participated in each workshop: two from the Sign Language
Community (a deaf teacher and a teacher of D/deaf persons in Workshop 3; two D/deaf teachers in
Workshop 4), an LSM Interpreter, and a Researcher. Three codesigners were women in Workshop 3
and two in Workshop 4, with average ages of 32 and 41, ranging from 20 to 44 and from 32 to 52 years old,
respectively.</p>
      <p>The inclusive participatory practice began with the use of the Evaluation Frame artifact to carry out
the brainstorming on questions and problems and ideas and solutions for the design of ASLP systems.
Once the two scenarios were selected (#6 and #15), we conducted the session using an adaptation of
the Evaluation Frame artifact. The artifact’s adaptation consisted of framing an illustrative image for
each stakeholder identified in the Semio-participatory Workshop 1 next to its category and
identification, and by its side two blank spaces with images and labels representing questions and
problems, as well as ideas and solutions.</p>
      <p>In Semio-participatory Workshop 3, scenario #15 was ranked as the first preferred scenario, with
the argument that it is useful in a broad set of situations, from access to large amounts of content in
written language to data exchange through diverse communication means (e.g., email, instant
messaging). Scenario #6 was ranked as the second preferred scenario; researchers argued for
keeping it, since its main idea is to provide face-to-face bidirectional communication
between signers and non-signers.</p>
      <p>In order to analyze the data from Semio-participatory Workshops 3 and 4, we organized the data collected
during Workshop 1. This organization consisted of grouping together some of the potential interested
parties into more general types of stakeholder; thus, from the original twenty stakeholder types, we
converged to eighteen types. The deliverable of these two Workshops was a table with questions and
problems, and ideas and solutions raised by codesigners, which we then mapped onto the Semiotic
Ladder artifact to organize the set of socio-technical good practices we derived.</p>
    </sec>
    <sec id="sec-9">
      <title>6. Socio-technical design good practices mapped onto the Semiotic Ladder Artifact</title>
      <p>From Section 5 we can observe that the results from applying the artifacts (Stakeholders
Identification Diagram, Rating Scenarios and Evaluation Frame) are linked, in the sense that a
workshop depends on the results from a previous one. For instance, in Semio-participatory Workshop
1, the stakeholder “School teachers” was elicited as an actor who can contribute in the codesign of
ASLP systems. In Semio-participatory Workshop 2, codesigners discussed the possible scenarios in
light of the stakeholders elicited, so they were asked questions such as “Can you imagine if this scenario
was available for your schoolteacher to work with you as a student? Would this be positive or negative?”
to serve as a concrete example for reflection. In Semio-participatory Workshops 3 and 4, codesigners
brainstormed questions and problems, and ideas and solutions, related to the stakeholders identified in
Semio-participatory Workshop 1 and to the two most preferred scenarios. Codesigners were thus
invited to imagine what kinds of questions and problems, and ideas and solutions, could arise in a concrete
scenario: for instance, a situation where a schoolteacher in a mainstream classroom with many mixed
students (signers and non-signers) wants to conduct a pedagogical activity using scenario #15 as a
mediator for communication or collaboration between pairs of students.</p>
      <p>As a result, we found a larger number of good-practice items (46) for the “Human information
functions” (social, user experience and HCI aspects) than for “The IT platform” (17 items, covering
technical aspects). Since we were more interested in the human and context aspects of the
design of ASLP systems, this list of good practices is a contribution to a line of study that requires
an interdisciplinary approach.</p>
      <p>From top to bottom of the Semiotic Ladder, we first present the three “Human information
functions”: the Social, Pragmatic and Semantic levels.</p>
      <p>1. Social level. Since there are still many misconceptions, such as the existence of a universal sign language
(SL), assumptions of full literacy in written language, communication homogeneity, oralization, among others,
the guideline “Educate people about Deaf culture and deaf people’s rights” is an important issue
to address at many levels of technology design. From the literature review in this paper and
from a previous systematic review, we found that most research groups create their
own sign language databases, since there is no public repository with standardized data for the SL
of each country. This leads us to the guideline “Have political support to make a distributed
database an official location to share and to receive standardized data (annotated SL videos) as
an open science repository to support inter-disciplinary research”. One concern of many in the
Deaf community is that researchers contact them when they need to substantiate their
projects or to fulfil some work agenda; unfortunately, when projects are “completed” or the
workload is finished, the researchers disappear. That is why we included the guideline “Understand who
can be potential supporters in the general community to guarantee sustainability of the
technologies’ adoption and maintenance”. Table 2 presents the recommended good practices for
the Social level of the Semiotic Ladder Artifact.
2. Pragmatic level. The three good practices here address the three Social-level concerns above
in a more practical way, respectively: “Invite users to switch roles concerning mode of
communication”, “Populate the database with users’ collaboration”, and “Record/register the
entire conversation, translations”. They can motivate people to learn about Deaf culture,
support data gathering for a national initiative, and give government or companies
motivation to sustain the infrastructure and services needed for continued use of the
technology. Table 3 presents the recommended good practices for the Pragmatic level of
the Semiotic Ladder Artifact.
3. Semantic level. The guideline "Provide textual, audio and video tutorials" is intended to ensure that a broad
range of users can access the technologies and know how to use them. In many contexts of use,
such as medical consultations or academic and legal advising, it is important to keep track of the
conversation history; this leads to the guideline "Automatically create a conversation timeline to save
translations". The guideline "Recommend usefulness of the result presented by the search or the
translation" can support assessment of the technology’s use for its improvement. Table 4 presents
the recommended good practices for the Semantic level of the Semiotic Ladder Artifact.</p>
      <p>Secondly, from top to bottom of the Semiotic Ladder, we present the three “The IT platform” levels:
Syntactic, Empirical and Physical.</p>
      <p>4. Syntactic level. The guideline “Allow users to adjust accessibility settings” can relate to
adjustments of color contrast, font size, video/animation display size, SL-to-speech output for interaction
between deaf and blind persons, and the speed of information presentation, following the appropriate
standards. Many concerns about data privacy in specific contexts of use were reported, despite
users wanting to keep records of the translations for themselves; the ideal situation is to have transparency
and well-defined norms to “Provide different privacy protocols for users’ data depending on the
facilities’ nature”. Table 5 presents the recommended good practices for the Syntactic level of
the Semiotic Ladder Artifact.
5. Empirical level. The guideline "Self-adapt SL recognition to users with multiple disabilities"
concerns the specificities of deaf signers who have an additional associated disability, in order to support
positioning in the right place, avoid clicking the wrong icon, recognize the adequate hand
configuration, and avoid feedback that presses the user for quick action, among others. Since
ASLP systems could be embedded in many contexts of use, one concern was about the
specialized vocabulary involved, both for deaf signers who still have to create new signs
collaboratively with the community as they reach higher academic education in diverse areas of
knowledge, and to guarantee a robust and diverse SL database. For this, the guideline "Gather data
from diverse specific domains" was proposed. Table 6 presents the recommended good
practices for the Empirical level of the Semiotic Ladder Artifact.
6. Physical level. The guideline "Design one large screen for all, or individual screens for each"
attends to expectations of private and public technology use, as well as of individual and
collective use of a text-to-SL translation system. As part of the technical planning for
supporting both scenarios chosen by participants, the guideline "Determine
protocol and infrastructure for information storage" was included. Table 6 presents the
recommended good practices for the Physical level of the Semiotic Ladder Artifact.</p>
    </sec>
    <sec id="sec-11">
      <title>7. Conclusion</title>
      <p>The primary goal of this research has been to share our understanding of socio-technical aspects that
are involved in the design of ASLP systems with Deaf community members as codesigners. In order to
uncover evidence on socio-technical aspects, we relied on the guidance of the socially aware design
approach by conducting four semio-participatory workshops with Deaf community codesigners. By
presenting how the research was conducted, its results, and the socio-technical good practices we derived, we
highlight the importance of placing not only potential users at the center of the design, but also their
ecosystem (main actors, responsible parties, clients, suppliers, partners and competitors, bystanders and
legislators), and the research impact of this broader view.</p>
      <p>Our invitation for a Sign Language community to participate in this democratic design process
resulted in an opportunity for all to reflect on and to share social and technical concerns regarding past
experiences and personal preferences in the context of ASLP systems. Participants were more timid in
the first workshop and more participative in the following ones. They understood the potential benefits this
type of technology can bring to their lives or to those of their children or relatives. One deaf participant reported
twice that she was having fun being a codesigner, that she was learning from the discussions, and that she
perceived her ideas were valued by others and by the project.</p>
      <p>
        We provided a set of sixty-three experience-grounded socio-technical good practices for the design
of ASLP systems, from which we can proceed to the next steps of the research. We recall that the two
scenarios chosen by codesigners were #6 (real-time, in-person communication mediated by a glass
interface or a screen, between a signer and a non-signer) and #15 (translation from text to SL). Most of these good
practices can be applied to both scenarios. However, the social level raises more
concerns for scenario #6, since for many stakeholder types it involves a conversation between two
persons in public environments. Some recurrent codesigner concerns had to do with the use of data
from translations versus privacy, with educating people about Deaf culture and sign language, with how
researchers will deal with sign language specificities, such as domain-specific language and
regionalisms, and with providing other visual cues to facilitate understanding and to ensure a positive user
experience. These findings and good practices are not set in stone, as we understand the need to
complement them with input from other categories of stakeholders. However, we share this contribution to
invite whoever seeks to further investigate this subject in an interdisciplinary research team to broaden
their views to include the human and context aspects. Also, Artificial Intelligence researchers could
benefit from our set of good practices in order to discuss topics related to Fairness, Accountability,
Transparency, and Ethics (FATE) since relevant issues still need to be further addressed [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>Looking back at the entire process, we share a discovery process in light of the Socially Aware
Design approach. This discovery process is related to the way inclusive participatory practices can be
conducted and artifacts can be adapted to promote participants’ engagement in the design of solutions
with Deaf community codesigners. We found that an initial stimulus activity related to the topic to be
discussed (e.g., presenting a video or an app, making a conversation with questions, inviting for a vote)
can be ice-breaking while eliciting data. The core session with an artifact must not take too long. The
choice of day and time, as well as having a snack, helps keep energy at a high level. Artifacts that present
the same information in different formats (e.g., short simplified text and images) and are explained in
the preferred language of the participants ensure inclusiveness. Sharing the workshop content
beforehand with the interpreter can help him or her feel more relaxed and enjoy the activity. Colorful
supplies, such as post-its and sharpies, and different kinds of stickers (dots, smiley faces, numbers, thumbs
up and down) may seem superfluous, but participants get excited to choose colors and feel motivated
to collaborate. Finally, comments and testimonies from participants are valuable, and participants should be
informed of this by asking them to write down their thoughts on post-its and to discuss them further in
their preferred mode of communication.</p>
      <p>
        Additionally, taking the five calls to action proposed by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], in this research we included the Sign
Language community as codesigners (Call 1), strengthening bonds with a local association and school,
not only by conducting the workshops, but also by participating in social activities and taking LSM lessons
along with them. During the semio-participatory workshops, we had the opportunity to discuss real-world
applications (Call 2), relating potential stakeholders to potential scenarios of technology use and to
problems based on their previous and current life experiences, together with ideas for solutions. Moreover, we
broadened the concept of user interface (UI) guidelines (Call 3) to socio-technical good practices, with
results presented in Section 5, in which the UI is represented at the semantic level of the semiotic ladder
artifact. We identified, in literature reviews and in discussions with the Deaf community, the difficulty
of finding public, representative curated datasets (Call 4), especially standardized annotated videos in sign
language (notation standards and support, Call 5). A recurrent topic within the socio-technical good
practices concerned modeling, building and managing a distributed database. In Mexico, some
papers report the use of the video library of the DIELSEME; however, twenty-one papers on ASLP
systems from Mexico refer to collecting their own datasets.
      </p>
      <p>We had planned at least eight semio-participatory workshops in addition to the four described in this
paper. Unfortunately, the COVID-19 pandemic forced us to rearrange our plans. We were not able to
conduct remote workshops, since most participants come from low-income families and did not
have the infrastructure to continue. A remote continuation of the research was possible with Brazilian
participants, and it is currently an ongoing effort.</p>
    </sec>
    <sec id="sec-12">
      <title>8. Acknowledgements</title>
      <p>We thank the Doctoral Program in Educational Systems and Environments of BUAP, the Directorate
for Special Education of the State’s Ministry of Education (SEP), “Casa del Sordo” and “CAM Jean
Piaget,” all in Puebla, for their willingness to collaborate with our research. We thank all the volunteers
for their time and for sharing their knowledge with the research team. The project is registered in
Plataforma Brasil (CAAE) under No. 18708619.2.0000.8088.</p>
    </sec>
    <sec id="sec-13">
      <title>9. References</title>
      <p>[33] M. E. Gutiérrez-Martínez. Modelo de redes neuronales para el reconocimiento de señas en
un contexto de levantamiento de una denuncia por robo. Tesis de Maestría en Sistemas Interactivos
Centrados en el Usuario, Universidad Veracruzana, Xalapa, Veracruz, 2018.
[34] B. Martínez-Seis, O. Pichardo-Lagunas, E. Rodriguez-Aguilar, E. Saucedo-Diaz. Identification of
Static and Dynamic Signs of the Mexican Sign Language Alphabet for Smartphones using Deep
Learning and Image Processing. Research in Computing Science, 148, 2019, 199-211.
[35] E.M. Morales, O.V. Aparicio, P. Arguijo, R.Á. Armenta, A.H. López. Traducción del lenguaje de
señas usando visión por computadora. Research in Computing Science, 148, 2019, 79-89.
[36] F. P. Pérez-Priego, J. M. Ceja-Olivares, J. F. Talamantes-Serrano, D. N. Rivera-Aguilar. Image
recognition of Mexican Sign Language. In: Castillo Montiel, E.; Chimal Eguía, J.C.; Uriarte Arcia,
A.; Cabrera Rivera, L. (Eds.). Research in Computing Science, vol. 58, pp. 57-68, 2012. Retrieved
August 13, 2020, from https://www.rcs.cic.ipn.mx/2012_58/RCS_58_2012.pdf
[37] P. P. E. Rivas, O. Velarde-Anaya, S. González-López, P. P. Rivas, N. A. Álvarez-Torres.
Entrenamiento de una Red Neuronal para el Reconocimiento de Imágenes de Lengua de Señas
Capturadas con Sensores Profundidad. Congr. Int. en Ing. Electrónica. Mem. ELECTRO, Vol. 39,
pp. 55-59, 2017. Retrieved August 06, 2020, from https://arxiv.org/abs/1804.00508
[38] M. Rivera-Acosta, S. Ortega-Cisneros, J. Rivera, F. Sandoval-Ibarra. American Sign Language
Alphabet Recognition Using a Neuromorphic Sensor and an Artificial Neural Network. Sensors
2017, 17, 2176. doi:10.3390/s17102176.
[39] G. González-Saldaña, J. Sánchez-Cerezo, M. M. Díaz-Bustillo, A. Pérez-Ata. Recognition and
Classification of Sign Language for Spanish. Computación y Sistemas, Vol. 22, 1, 2018, 271-277.
[40] F. Solís, C. Toxqui, D. Martínez. Mexican sign language recognition using Jacobi-Fourier
moments. Engineering, 07, 700-705, 2015. DOI: 10.4236/eng.2015.710061.
[41] F. Trujillo-Romero, F. E. Luis-Pérez, S. O. Caballero-Morales. Multimodal Interaction for Service
Robot Control. 22nd International Conference on Electrical Communications and Computers
(CONIELECOMP 2012). DOI: 10.1109/CONIELECOMP.2012.6189929.
[42] F. Trujillo-Romero, S. O. Caballero-Morales. 3D Data Sensing for Hand Pose Recognition.
International Conference on Electronics, Communications and Computers (CONIELECOMP
2013), IEEE. DOI: 10.1109/CONIELECOMP.2013.6525769.
[43] S. O. Caballero-Morales, F. Trujillo-Romero. 3D Modeling of the Mexican Sign Language for a
Speech-to-Sign Language System. Computación y Sistemas, Vol. 17, No. 4, 2013, pp. 593-608.
[44] C. R. Estrivero-Chavez, M. A. Contreras-Teran, J. A. Miranda-Hernandez, J. J. Cardenas-Cornejo,
M. A. Ibarra-Manzano, D. L. Almanza-Ojeda. Toward a Mexican Sign Language System using
Human Computer Interface. International Conference on Mechatronics, Electronics and
Automotive Engineering (ICMEAE 2019). DOI: 10.1109/ICMEAE.2019.00010.
[45] J. Ibarra-Leybón, M. del R. Barba-Ramírez, V. Picazo-Taboada. "SENSor Foto-Eléctrico Aplicado
al Movimiento de los Dedos de las Manos". Computación y Sistemas, Vol. 10, 1, 2006, 57-68.
[46] L. M. Pérez, A. J. Rosales, F. J. Gallegos, A. V. Barba. LSM static signs recognition using image
processing. 14th International Conference on Electrical Engineering, Computing Science and
Automatic Control (CCE), Mexico City, Mexico, September 20-22, 2017.
[47] F. Trujillo-Romero, S. O. Caballero-Morales. Towards the Development of a Mexican
Speech-to-Sign-Language Translator for the Deaf Community. Acta Universitaria, vol. 22, marzo 2012, pp.
83-89, Universidad de Guanajuato, Guanajuato, México. ISSN: 0188-6266.
[48] J. Cervantes, F. García-Lamont, L. Rodríguez-Mazahua, A.Y. Rendon, A.L. Chau. Recognition of
Mexican Sign Language from Frames in Video Sequences. In: Huang D.S., Jo K.H. (Eds.) Intelligent
Computing Theories and Application. ICIC 2016. Lecture Notes in Computer Science, vol 9772.
Springer, Cham. https://doi.org/10.1007/978-3-319-42294-7_31.
[49] G. García-Bautista, F. Trujillo-Romero, S. O. Caballero-Morales. Mexican Sign Language
Recognition Using Kinect and Data Time Warping Algorithm. International Conference on
Electronics, Communications and Computers (CONIELECOMP 2017). DOI:
10.1109/CONIELECOMP.2017.7891832.
[50] I. Guyon, V. Athitsos, P. Jangyodsuk, H. J. Escalante. The ChaLearn gesture dataset (CGD 2011).
Machine Vision and Applications 25, 1929-1951 (2014).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1] M. C. C. Baranauskas.
          <article-title>Socially aware computing</article-title>
          .
          <source>In: VI International Conference on Engineering and Computer Education (ICECE 2009)</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2] D. Bragg, O. Koller, M. Bellard, L. Berke, P. Boudreault, A. Braffort, N. Caselli, M. Huenerfauth, H. Kacorri, T. Verhoef, C. Vogler, M. R. Morris.
          <article-title>Sign Language Recognition, Generation, and Translation: An Interdisciplinary Perspective</article-title>
          .
          <source>In The 21st International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '19)</source>
          (
          <year>2019</year>
          ),
          <fpage>16</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Prietch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Pineda-Olmos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. dos S.</given-names>
            <surname>Paim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Calleros-Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>García-Guerrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Resmini</surname>
          </string-name>
          .
          <article-title>Discussion on Image Processing for Sign Language Recognition: An overview of the problem complexity</article-title>
          . In: V Jornadas Iberoamericanas de Interacción Humano-Computador,
          <year>2019</year>
          , Puebla/MX.
          <article-title>Research and development of new technology</article-title>
          .
          <source>Puebla/MX: Fondo Editorial BUAP.</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Prietch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Calleros-Gonzalez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Pineda-Olmos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>García-Guerrero</surname>
          </string-name>
          .
          <article-title>Cultural aspects in the user experience design of an ASLR system</article-title>
          .
          <source>In Proceedings of the IX Latin American Conference on Human Computer Interaction (CLIHC '19). Article 37</source>
          ,
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          ,
          <year>2019</year>
          . DOI:https://doi.org/10.1145/3358961.3359000.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Prietch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. dos S.</given-names>
            <surname>Paim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Pineda-Olmos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>García-Guerrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Calleros-Gonzalez</surname>
          </string-name>
          .
          <article-title>The Human and the Context Components in the Design of Automatic Sign Language Recognition Systems</article-title>
          . In: Ruiz P.,
          <string-name>
            <surname>Agredo-Delgado</surname>
            <given-names>V</given-names>
          </string-name>
          .
          <article-title>(eds) Human-Computer Interaction</article-title>
          . HCI-COLLAB
          <year>2019</year>
          .
          <source>Communications in Computer and Information Science</source>
          , vol
          <volume>1114</volume>
          . Springer,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>R.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.M.</given-names>
            <surname>Holmes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.M.</given-names>
            <surname>Mertens</surname>
          </string-name>
          . Research Ethics in Sign Language Communities.
          <source>Sign Language Studies</source>
          <volume>9</volume>
          (
          <issue>2</issue>
          ),
          <fpage>104</fpage>
          -
          <lpage>131</lpage>
          . doi:10.1353/sls.0.0011. (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>E. T.</given-names>
            <surname>Hall</surname>
          </string-name>
          .
          <source>The Silent Language. Anchor Books Editions</source>
          . (
          <year>1990</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Prietch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>García-Guerrero</surname>
          </string-name>
          .
          <article-title>A Systematic Review of User Studies as a Basis for the Design of Systems for Automatic Sign Language Processing</article-title>
          .
          <source>ACM Transactions on Accessible Computing (TACCESS)</source>
          ,
          <year>2022</year>
          . [In process to be published]
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Prietch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Sánchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>García-Guerrero</surname>
          </string-name>
          .
          <article-title>Understanding cultural aspects of deaf communities in México towards the codesign of automatic sign language processing systems</article-title>
          .
          <source>Journal on Interactive Systems</source>
          , Porto Alegre, RS, v.
          <volume>13</volume>
          , n. 1, p.
          <fpage>15</fpage>
          -
          <lpage>25</lpage>
          ,
          <year>2022</year>
          . DOI: 10.5753/jis.2021.964. Available at: https://sol.sbc.org.br/journals/index.php/jis/article/view/964.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>J. O.</given-names>
            <surname>Wobbrock</surname>
          </string-name>
          . Seven research contributions in HCI. Unpublished. (
          <year>2012</year>
          ). Available at http://faculty.washington.edu/wobbrock/pubs/Wobbrock-2012.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bragg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Caselli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Hochgesang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Huenerfauth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Katz-Hernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Koller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kushalnagar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Vogler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. E</given-names>
            <surname>Ladner</surname>
          </string-name>
          .
          <article-title>The FATE Landscape of Sign Language AI Datasets: An Interdisciplinary Perspective</article-title>
          .
          <source>ACM Trans. Access. Comput.</source>
          ,
          <volume>14</volume>
          , 2, Article 7 (
          <year>July 2021</year>
          ), 45 pages.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Schuler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Namioka</surname>
          </string-name>
          .
          <source>Participatory Design: Principles and Practices</source>
          . Lawrence Erlbaum Associates, Hillsdale,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>K.</given-names>
            <surname>Liu</surname>
          </string-name>
          .
          <source>Semiotics in Information Systems Engineering</source>
          . Cambridge University Press,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Connell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mace</surname>
          </string-name>
          , et al.
          <source>The Principles of Universal Design, Version 2.0</source>
          .
          Raleigh, NC: The Center for Universal Design, North Carolina State University (NCSU),
          <year>1997</year>
          . Retrieved from https://projects.ncsu.edu/ncsu/design/cud/about_ud/udprinciplestext.htm on September 13,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Stamper</surname>
          </string-name>
          .
          <source>Information in Business and Administrative Systems</source>
          . Wiley Inc., New York,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J. V.</given-names>
            <surname>da Silva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Buchdid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. F.</given-names>
            <surname>Duarte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. C. C.</given-names>
            <surname>Baranauskas</surname>
          </string-name>
          .
          <article-title>SAwD - Socially aware design: An organizational semiotics-based CASE tool to support early design activities</article-title>
          .
          In:
          <source>Socially Aware Organisations and Technologies, Impact and Challenges: 17th IFIP WG 8.1</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>