<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Design Methods for Artificial Intelligence Fairness and Transparency</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Simone Stumpf</string-name>
          <email>Simone.Stumpf.1@city.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lorenzo Strappelli</string-name>
          <email>Lorenzo.Strappelli@city.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Subeida Ahmed</string-name>
          <email>Subeida.Ahmed@city.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yuri Nakao</string-name>
          <email>nakao.yuri@fujitsu.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Aisha Naseer</string-name>
          <email>Aisha.Naseer@uk.fujitsu.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giulia Del Gamba</string-name>
          <email>giulia.delgamba@intesasanpaolo.com</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Regoli</string-name>
          <email>daniele.regoli@intesasanpaolo.com</email>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>City, University of London</institution>
          ,
          <addr-line>Northampton Square, London</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Fujitsu Laboratories Ltd.</institution>
          ,
          <addr-line>Kawasaki</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Fujitsu Laboratories of Europe</institution>
          ,
          <addr-line>Hayes</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Intesa Sanpaolo S.p.A.</institution>
          ,
          <addr-line>Turin</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Fairness and transparency in artificial intelligence (AI) continue to become more prevalent as topics for research, design and development. General principles and guidelines for designing ethical and responsible AI systems have been proposed, yet there is a lack of design methods for these kinds of systems. In this paper, we present CoFAIR, a novel method for designing user interfaces for exploring fairness, consisting of a series of co-design workshops followed by a wider evaluation. This method can be readily applied in practice by researchers, designers and developers to create responsible and ethical AI systems.</p>
      </abstract>
      <kwd-group>
<kwd>fairness</kwd>
        <kwd>transparency</kwd>
        <kwd>explanations</kwd>
        <kwd>design</kwd>
        <kwd>methods</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>It has been realised that artificial intelligence and machine learning pose unique design challenges that merit new design practices [6, 7, 8, 9]. In the last few years, a number of approaches have been suggested to ease the design and development of responsible and ethical AI systems. Here, we present an overview of guidelines for designing ethical AI systems, before turning to work that aims to address design patterns and methods.</p>
      <sec>
        <title>2.1. Design Guidelines</title>
        <p>Considerable thought has been given to providing guidelines for designing and developing these ethical AI systems. The most well-known of these have been developed by Microsoft, Google and IBM, with some efforts also being produced by the High-Level Expert Group (HLEG) on AI set up by the European Commission. We will briefly review these efforts, but see [<xref ref-type="bibr" rid="ref61">3</xref>] for a comprehensive survey of AI ethics guidelines.</p>
        <p>Microsoft’s Guidelines for Human-AI Interactions [<xref ref-type="bibr" rid="ref1 ref21 ref28 ref29 ref39 ref45 ref48 ref59 ref9">10</xref>], as part of their Responsible AI area, are implemented as a set of eighteen cards. Each card describes a guideline and some examples of how that guideline might apply in practice, over four stages of use: ‘initially’, ‘during interaction’, ‘when wrong’, and ‘over time’. These guidelines provide designers and developers with high-level considerations to make during the design process. For example, guideline 6 prompts to “mitigate social biases” during interaction by ensuring that “the AI system’s language and behaviors do not reinforce undesirable and unfair stereotypes and biases.” Guideline 11 is to “make clear why the system did what it did” when wrong and suggests to “enable the user to access an explanation of why the AI system behaved as it did”. While each comes with an example of how this might be realised in practice, it is up to the designer or developer to craft appropriate ways to implement the guideline.</p>
        <p>Google’s Responsible AI practices [<xref ref-type="bibr" rid="ref37 ref43">11, 12</xref>] suggest that ethical AI systems should be designed following best practices for software systems, but then supplemented with considerations specific to machine learning. Overall, a human-centered design approach should be followed to actively consider fairness, interpretability, privacy and security from the outset. Specific advice for designing the user experience of AI systems has been given by the People + AI Handbook [<xref ref-type="bibr" rid="ref43">12</xref>], such as identifying user needs and their mental models, or addressing explainability and trust. While these guidelines do not explicitly surface fairness as a specific consideration, it is covered when collecting and evaluating data and also in communicating with users.</p>
        <p>IBM’s Everyday Ethics for Artificial Intelligence [<xref ref-type="bibr" rid="ref35">13, 14</xref>] suggests five areas to focus on in the development of ethical AI systems: accountability, value alignment, explainability, fairness and user data rights. The guidelines present a rationale of why these aspects require attention, make recommendations for actions to take and for questions the design team should consider, and provide examples of implementations.</p>
        <p>The HLEG on AI ethics guidelines for trustworthy AI [15] set out a framework of ethical principles and associated requirements that should be covered in AI development. In applying this framework, the report suggests adopting both technical and non-technical methods, such as transparency-by-design or inclusive design teams. In order to assess that AI has been developed in accordance with these principles and requirements, the report also puts forward a checklist to be used within design practices.</p>
        <p>While guidelines to develop responsible and ethical AI have some use in stimulating discussions within design teams about high-level concepts and requirements that need to be met, as noted previously [16], these guidelines are fairly abstract and are difficult for designers and developers to implement in practice.</p>
      </sec>
      <sec>
        <title>2.2. Design Patterns</title>
        <p>Currently, there is a lack of design patterns for AI systems, which tell designers and developers what to design. In HCI and data visualisation, design patterns for common use cases and scenarios on well-studied technologies are readily available. These tell designers and developers how to support interactions and communications through a user interface. Similarly, there has been a line of research in Explainable AI (XAI) that aims to establish what information to communicate and what interactions to support in order to make a system transparent. High-level principles for explainability and controllability have been proposed [17], such as ‘be sound’, ‘be complete’, ‘be actionable’, and ‘be reversible’.</p>
        <p>In addition, there is an emerging body of research that aims to investigate what is most effective in terms of user interfaces that provide explanations. A lot of work has focused on what information should be available to users and how this information should be communicated via text, graphics or visualizations [18, 19, 20, 21, 14]. A recent effort to start developing design patterns [4], backed by cognitive psychology, has suggested links (or patterns) between how people should reason, how people actually reason, and how to generate explanations that support reasoning.</p>
      </sec>
      <sec>
        <title>2.3. Design Methods</title>
        <p>There are only scarce considerations of design methods for telling designers and developers how to design ethical and responsible AI systems using a structured process. At the moment, most of the guidelines mentioned in section 2.1 suggest adopting a User-Centred Design (UCD) process involving user research, designing and prototyping, and evaluating, using techniques such as interviews, observations, and user testing. Yet given that many have argued that AI system design poses significant challenges [6, 7, 8], there is as yet a dearth of work that addresses design methods that guide designers and developers in developing responsible AI.</p>
        <p>Very recently, design methods have been proposed that focus on designing AI algorithms with users. WeBuildAI [22] proposes a framework of steps that involves users in designing algorithms. This method proceeds by investigating feature engineering and selection through surveys and interviews, model building through pair-wise comparisons by users, and finally model selection through exposing the model decisions.</p>
        <p>The most well-known attempt to establish a design method for ethical AI user interfaces is transparency design [5]. This work proposes a stage-based process to first investigate the mental models of experts and then of users to establish a target mental model of what needs to be explained, before iteratively prototyping the user interface to establish how to communicate the explanations and then evaluating it. To develop the mental models of experts, interviews and workshops are suggested, while to investigate users’ mental models it is suggested to employ surveys, interviews, task-based studies and drawing tasks. For developing the target model, card sorting, interviews and focus groups were proposed. Designing and evaluating the user interfaces can involve focus groups, workshops, and think-aloud studies. There are now several case studies that have used this process to successfully implement explanations in AI interfaces [23, 24, 25].</p>
      </sec>
      <p>Our work is concerned with investigating design methods for user interfaces that can help with making the fairness of AI algorithms transparent, and then help with mitigating fairness issues by incorporating user feedback back into the algorithm.</p>
    </sec>
    <sec id="sec-2b">
      <title>3. The CoFAIR method</title>
      <p>We present here our method to Co-design Fair AI InteRactions, CoFAIR (Fig. 1). This method is based on a co-design process [26] which aims to work closely with users to develop solutions through a participatory design approach. As with other co-design approaches, it is characterised by very close involvement of a small number of users in all stages of designing a solution, in which these users are empowered to be on an equal footing with researchers and designers. Co-design has been successfully adopted to design human-centred technology in other settings [<xref ref-type="bibr" rid="ref13">27, 28</xref>]; however, how to use co-design in shaping AI solutions has not yet been investigated.</p>
      <p>Our proposed method for responsible AI includes a series of co-design workshops with participants/co-designers that focus on user research, conceptual and detailed design, and initial testing, which is then broadened in a final user evaluation stage.</p>
      <sec>
        <title>3.1. Co-design workshops</title>
        <p>To start, CoFAIR comprises a series of workshops to work closely with a limited number of participants to research the topic area, to develop some designs, and then to test those designs. To set up these workshops, a number of considerations need to be made.</p>
        <sec>
          <title>3.1.1. Recruitment</title>
          <p>Participants in a workshop should be the targeted users of an AI system. The aim is to closely involve these participants in designing a solution that is right for them, and to align the design with their requirements. If there are a number of different user groups that are distinct in their background, use cases or tasks, then separate workshops should be organised for each. The users do not need to have a detailed technical understanding or any experience with system design or development, as they will be supported by researchers, designers, and developers. Ideally, they should be relatively representative of the user group in terms of background and demographics. For each workshop, the number of participants should be kept low, between 3-6 people, so as to encourage interaction between participants.</p>
        </sec>
        <sec>
          <title>3.1.2. Workshop Aims and Structure</title>
          <p>We suggest that the workshops aim to cover three main steps in user-centred design: user research, conceptual and detailed design, and testing. User research in these workshops should investigate the users’ current conceptualisations and experience within the topic area, pain points, and high-level needs and wants. This user research can be formalised and communicated through co-created personas that reflect the target user group [27, 29], or could be more informal, as simple lists of requirements. Conceptual and detailed design will involve the participants in surfacing what information and interactions are needed to achieve their tasks while also clarifying how to present this in the user interface. This might be documented in storyboards, user journeys, and sketches, or produce scenario-based object-action analyses. Last, these designs should be prototyped, either using low-fidelity paper prototypes or more high-fidelity clickable wireframes, and then tested with participants.</p>
          <p>Depending on the complexity of what is to be designed, these steps need to be spread over a series of workshops. Most naturally, these steps suggest three sequential workshops, each with a distinct focus on user research, design, and testing. It might be possible to combine user research and design, and thus reduce the number of workshops to two. However, more iterations might be needed to explore design options and iterations of prototypes, and thus more workshops might need to be scheduled. Our method is flexible enough to accommodate this.</p>
        </sec>
        <sec>
          <title>3.1.3. Workshop Activities</title>
          <p>To achieve the aims of the workshops, co-design usually proceeds with group-based, hands-on activities and discussion around these activities. For user research, these could involve real or hypothetical scenarios and the user experiences around the topic. Activities would typically explore problematic aspects and the challenges that users face in carrying out a user task. They would also probe for basic understandings and conceptualisations around the topic of investigation. These can be (but do not have to be) documented in personas, and recent work has shown how these personas can be co-created with co-design participants [27, 29].</p>
          <p>Activities that aim to support design are also kept very concrete. Typically, this would investigate a scenario of use, either real or fictitious. As part of conceptual design, participants would usually be invited to go through the scenario of use and indicate what they would look for, what interactions they would expect, and what information the system would need to communicate. It is sometimes helpful to develop storyboards or user journeys with co-design participants. Detailed design can flesh out design options through sketches; however, this needs to be carefully supported and scaffolded, as participants are often too timid to sketch themselves.</p>
          <p>In testing, a prototype, often created or refined by a designer/developer offline, is exposed to evaluation by co-design participants. Again, a real or fictitious scenario is used to explore how the prototype might be used and what improvements are necessary for a subsequent iteration.</p>
        </sec>
      </sec>
      <sec>
        <title>3.2. Broader Evaluation</title>
        <p>A common criticism of co-design is the limited number of participants that are involved in developing a solution. This leads to the fear that while the solution is optimally adapted to the 3-6 participants in the workshops, it is unsuitable for the wider user population. Our method suggests that co-design is always followed by a broader evaluation of the designed system through evaluations with users. This can take various forms, such as think-aloud user testing or large-scale crowd-sourced system use.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Method Case Study: Loan Application Fairness</title>
      <p>In order to show how our method can be instantiated in practice, we present a case study in which we investigated how to develop user interfaces that allow users to explore the fairness of AI loan application decisions. Loan application decisions are increasingly being automated or supported using AI models (typically employing logistic regression). This study targeted three different user groups in two iterations: non-expert members of the public (iteration 1), and loan officers and data scientists (iteration 2). Iteration 1 details how we instantiated the method with non-expert customers, while iteration 2 is concerned with the method used with loan officers and data scientists. We focus on the techniques employed in our method; we will report on the findings of these studies elsewhere.</p>
      <sec>
        <title>4.1. Iteration 1: Non-expert Members of the Public</title>
        <p>We ran a series of co-design workshops with a total of 12 participants in the USA, UK, and Japan. Because of COVID-19 restrictions, we had to change our planned face-to-face workshops to be conducted entirely online.</p>
        <sec>
          <title>4.1.1. Co-design Participants</title>
          <p>We recruited 3 participants (2 women, 1 man, mean age 47.3) for the co-design workshops held in the USA, 5 participants (3 women, 2 men, mean age 34.2) in the UK, and 4 participants (3 women, 1 man, mean age 33.75) in Japan, through social media and personal contacts. All participants’ ethnicities broadly reflected the population of the country, and most participants had been educated to Bachelor degree level. We paid an incentive of £40, or the equivalent in the local currency.</p>
        </sec>
        <sec>
          <title>4.1.2. Workshop Procedure</title>
          <p>For each country, we held 2 co-design workshops; these two workshops were 3 weeks apart. Both workshops lasted 2 hours.</p>
          <p>In workshop 1, we conducted user research and conceptual design. For the user research part, we investigated how participants defined fairness, and then how they explored fairness in loan decisions. For investigating how participants viewed AI fairness, we first got participants to tell us about their own experiences of fair or unfair decisions that affected them, especially if they encountered AI in that decision-making. We then also probed them to consider the fairness of using AI systems in hiring or making medical decisions, and what makes AI systems fair or unfair.</p>
          <p>To continue user research and start on conceptual design, we constructed an activity involving four fictitious loan application scenarios (Fig. 2). This allowed us to further investigate what attributes and information they were looking for to assess the fairness of the applications’ outcomes, and potentially what they would change to make the decisions fairer. Each scenario was discussed in turn: whether it was fair, why (based on the information included in the application or their experience of the decisions they had seen), and what information would have been useful for them to assess fairness better. We changed some of the application scenario details to localize them to each country (e.g. names, currency, dates) but otherwise kept them the same. We showed participants information that is usually collected as part of a loan application process, based on the application form of a well-known international bank. Application 1 (USA/UK: Mark Benson or Kazufumi Takahashi) was always approved, as it was a ‘safe’ application, with a homeowner with a very good credit score applying for a small loan to buy a used car. Application 2 (USA/UK: Sadia Mohammed or Chihe Pak) was rejected, as it was a more ‘risky’ application with low income, a part-time job and a low credit score. We also included her application to investigate any potential minority or age biases. Application 3 (USA/UK: Jennifer Clary or Maika Suzuki) was also rejected, but crucially her details were very similar to Mark Benson’s. This was to introduce an application that seemed, without any further information, to be blatantly unfair. Finally, application 4 (USA/UK: Kwame Odejima or Dũng Nguyên) was accepted although it seemed more ‘risky’.</p>
          <p>After the workshop, two researchers reviewed the workshop recordings and analysed the participants’ definitions of AI fairness and how they thought AI could be made fairer. For each scenario, we analysed what criteria they used to assess fairness, how they were using information to explore fairness, and what other information they wanted in order to be able to assess whether a loan application decision was fair, or potentially biased. Based on this analysis, we constructed clickable wireframes to instantiate their input in an interface. We did this by carefully mapping information that they used for fairness assessments, and requests for further information obtained in workshop 1, to interface design elements; we did not involve participants in detailed design activities.</p>
          <p>In workshop 2, we moved on to a testing activity. We structured our discussion on the clickable wireframes, and developed some scenarios to explore fairness using the clickable prototype. Going through each screen’s functionality, we discussed what helped to understand whether the application decisions were fair, what additional information they would like to determine fairness, and what feedback they would like to give to mitigate fairness issues.</p>
        </sec>
        <sec>
          <title>4.1.3. Broader evaluation</title>
          <p>Following the co-design workshops, we implemented an improved interface. We then set up an online study to investigate how this prototype is employed by end-users to assess the fairness of an AI system, and how suggested changes to the model affect fairness. We recruited 388 participants (129 female, 256 male, 2 other and 1 preferred not to say) through Prolific (https://www.prolific.co/), an online research platform, and paid them £3.50 for an expected 30-minute session. About half of our participants had some programming experience and familiarity with AI, machine learning or statistics, and 146 participants had at least a Bachelor degree.</p>
          <p>We asked participants to interact with the interface to assess the fairness of an AI system. Instead of using an open-source dataset, the AI system we developed was based on an anonymized loan decisions dataset we obtained from Intesa Sanpaolo. This dataset contains decisions made on 1000 loan applications and has 35 attributes, including the label of whether the loan application was accepted or rejected. These attributes include demographic information about the applicant (age, gender, nationality, etc), financial information (household income, insurance, etc), loan information (amount of loan requested, purpose of loan, loan duration, monthly payments, etc), as well as some information on their financial and banking history (years of service with the bank, etc). There were also some attributes that related to internal bank procedures, such as a money laundering check and a credit score developed by the bank. We developed a logistic regression model after removing sparse values, or where multiple attributes had similar values; the accuracy of the resulting model was 0.618. Note that the model was unfair with respect to the nationality attribute: ‘foreign’ applicants tended to be rejected more frequently than citizens, using disparate impact as a fairness metric.</p>
          <p>The evaluation consisted of a brief pre-questionnaire and tutorial, 20 minutes of free use of the interface to assess fairness, and a post-questionnaire. To evaluate the use of this prototype, we captured participants’ ratings of the AI’s fairness, and key interactions with the user interface were logged. We also asked them to describe in their own words what strategies they used to assess the fairness of the system, any systematic fairness issues they had spotted, and their views on suggesting changes and addressing fairness. We then finished the study by asking them to rate their task load using the NASA-TLX questionnaire [30]. On study completion, we analysed the interactions with the prototype to evaluate whether this prototype was effective in supporting users in exploring the fairness of an AI model.</p>
        </sec>
      </sec>
      <sec>
        <title>4.2. Iteration 2: Loan Officers and Data Scientists</title>
        <sec>
          <title>4.2.1. Co-design Participants</title>
          <p>This iteration was focused on exploring how to support loan officers and data scientists in exploring the fairness of loan application decisions. These two stakeholder groups are different: loan officers typically act as intermediaries between the bank and customers and had practical experience of loan decision making, while data scientists have experience in modelling and supporting and/or investigating customer application decisions. For this study, we recruited six loan officers (5 men, 1 woman, mean age 36.5) and six data scientists (3 men, 3 women, mean age 29.7) through Intesa Sanpaolo.</p>
        </sec>
        <sec>
          <title>4.2.2. Workshop Procedure</title>
          <p>Due to COVID-19 and logistical limitations, all interactions with the users were conducted online. We structured the activities into two workshops, each lasting 2 hours. Both workshops were repeated for each separate stakeholder group.</p>
          <p>As with the previous iteration, the aim of workshop 1 was to conduct user research into how fairness was perceived by these user groups, and to carry out initial conceptual design. Workshop 1 started off by discussing the aspects that make decisions in loan applications fair or unfair, to get an insight into participants’ loan application experience and unfair scenarios that they may have come up against. This was followed by discussing how AI could impact loan application decision-making and fairness.</p>
          <p>To further our user research and also understand what key information is important to use in conceptual design, we then introduced an activity to explore the anonymized loan decisions dataset we obtained from Intesa Sanpaolo. The dataset was sent ahead of the workshop so that participants could have time to look at it and have it available on their computers during the session. The discussion elicited information on participants’ process, information needs, and the functionality required to develop an interface. To help participants investigate the dataset, a data visualisation tool was created, which was used to present the dataset should participants require it. It provided the ability to slice the features on the fly and present them using various chart types such as histograms, scatter plots, bar graphs and a strip plot.</p>
          <p>Next, we introduced an activity to reflect on a causal graph showing causal relationships between the dataset attributes. This causal graph was derived through automatic discovery, showing how attribute values and the loan application decisions are related to each other. Through this activity we aimed to understand how these users might interpret the causal graph and how this might be employed in exploring the dataset for fairness.</p>
          <p>After the first workshops, a researcher analysed the audio recordings to derive findings about how these user groups judged whether loan applications were fair, how these users explored the dataset to determine fairness, and how they interpreted the causal graph. Based on this analysis, the researcher developed a clickable wireframe to be used in workshop 2 (Fig. 3). Again, we did not involve the users in detailed design. Due to implementation constraints, we only made a selection of the wireframe interactive, and focused on a scenario in which to explore the relationships between citizenship, gender, credit risk level, loan amount and number of instalments in detail.</p>
          <p>The aim of workshop 2 was to informally test the clickable wireframe. This wireframe was screen-shared, and the researcher ‘drove’ the interactions with it, acting as an extension of the participants and clicking through it on their behalf. The researcher stepped through it with the respective user groups, and probed whether they understood how it worked, whether the information was useful for exploring fairness, or what could be improved. Analysis of the second workshop investigated changes that needed to be made to improve the clickable prototype for broader evaluation. Based on this analysis, the researchers designed a prototype (Fig. 4).</p>
        </sec>
        <sec>
          <title>4.2.3. Broader Evaluation</title>
          <p>The evaluations were conducted as one-to-one user tests, unlike the workshops in the previous phase. A total of 17 participants were recruited through Intesa Sanpaolo: 8 loan officers (5 men, 3 women, mean age 38) and 9 data scientists (5 men, 4 women, mean age 31.8). All participants held a master’s degree or higher.</p>
          <p>We developed ten tasks for participants to go through the prototype, from setting up the dataset to explore, to investigating the dataset using different components of the user interface. The study concluded with a post-questionnaire used to evaluate users’ experience. This questionnaire comprised ratings aimed at quantifying how effective the prototype was in supporting users in assessing fairness, including information, functionality and reasoning; free comments to express their feedback about the prototype; and the NASA-TLX questionnaire [30]. The broader evaluation was analysed as to what worked well and what did not, in order to develop functioning interfaces in future.</p>
        </sec>
      </sec>
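      <p>As a note on the fairness metric used in the case study: disparate impact is simply the ratio of acceptance rates between a protected group and a reference group. The following is a minimal sketch, not the study’s code; the variable names and toy data are invented for illustration.</p>

```python
# Illustrative sketch only: a disparate-impact check over accept/reject
# decisions, in the spirit of the metric used to characterise the loan model.
# Names and data are hypothetical, not from the study.

def disparate_impact(decisions, protected):
    """Ratio of acceptance rates: P(accepted | protected group, e.g. 'foreign')
    divided by P(accepted | reference group, e.g. citizens).

    decisions: iterable of 1 (accepted) / 0 (rejected)
    protected: iterable of bools marking protected-group membership
    """
    prot = [d for d, p in zip(decisions, protected) if p]
    ref = [d for d, p in zip(decisions, protected) if not p]
    return (sum(prot) / len(prot)) / (sum(ref) / len(ref))

# Toy data: protected group accepted at 40%, reference group at 60%.
decisions = [1, 0, 0, 1, 1, 1, 0, 1, 0, 0]
protected = [True, True, True, False, False, False, False, True, True, False]
ratio = disparate_impact(decisions, protected)
```

      <p>On the toy data the ratio is 0.4/0.6 ≈ 0.67, below the commonly used 0.8 (“four-fifths”) threshold, which would flag the model for review.</p>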
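      <p>The kind of on-the-fly feature slicing offered by the data visualisation tool in workshop 1 of iteration 2 can be illustrated with a small sketch; the records and column names below are invented for illustration, not taken from the Intesa Sanpaolo dataset.</p>

```python
# Hypothetical sketch of slicing a loan-decisions table by a feature and
# summarising the outcome, as the visualisation tool allowed participants
# to do interactively. Records and column names are invented.
import pandas as pd

records = [
    {"nationality": "citizen", "income": 52000, "accepted": 1},
    {"nationality": "foreign", "income": 31000, "accepted": 0},
    {"nationality": "citizen", "income": 47000, "accepted": 1},
    {"nationality": "foreign", "income": 45000, "accepted": 1},
]
df = pd.DataFrame(records)

# Slice on a feature and summarise the label, e.g. acceptance rate per
# nationality (the tool rendered such slices as bar charts or histograms).
acceptance_by_group = df.groupby("nationality")["accepted"].mean()
```
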
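      <p>The causal graph activity can likewise be sketched in code. The edges below are hypothetical examples, not the graph automatically discovered in the study; the helper simply walks the graph to find the attributes with a directed path into the decision, i.e. candidate causes to inspect when probing fairness.</p>

```python
# Hypothetical representation of a discovered causal graph over dataset
# attributes, as a mapping from each attribute to the attributes it
# directly influences. These edges are invented examples.
causal_edges = {
    "nationality": ["income"],
    "income": ["credit_score", "loan_decision"],
    "credit_score": ["loan_decision"],
    "loan_amount": ["loan_decision"],
}

def ancestors(graph, target):
    """All attributes with a directed path into `target`: the candidate
    causes a user might inspect when exploring the decision for fairness."""
    found = set()
    frontier = [n for n, succs in graph.items() if target in succs]
    while frontier:
        node = frontier.pop()
        if node not in found:
            found.add(node)
            frontier.extend(n for n, succs in graph.items() if node in succs)
    return found
```

      <p>For the example edges, the candidate causes of the loan decision are the three direct parents plus nationality, which acts only indirectly via income.</p>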
    </sec>
    <sec id="sec-6">
      <title>5. Discussion</title>
      <p>We have gained some experience from applying co-design in other application domains, and through a case study where we implemented the CoFAIR method to develop interfaces for exploring fairness. This showed that this method can be successfully employed to design interfaces for responsible AI systems. However, we encourage other researchers and practitioners to adopt this method and generate more data points to improve this approach, and also to validate it. In addition, CoFAIR was so far employed under COVID-19 restrictions, which meant that all workshop activities and testing had to be conducted remotely online, which impacted what we were able to do. If we had not been placed in this situation, we would have made different choices as to how to conduct the workshops. First, due to the online nature we shortened the co-design activities and compressed them into two workshops of two hours each. Ideally we would like to extend them to span three workshops and for a longer duration. Second, facilitation of online discussions is very difficult, and ideally we would have brought users together to discuss this more freely face-to-face. Last, we would have liked to involve users much more in conceptual and detailed design, for example through sketching or paper prototyping, but this is very difficult to do virtually.</p>
      <p>We can also note some general limitations of the CoFAIR method which should be considered before it is chosen as a design approach. First, as with all co-design, there is a danger that interfaces are developed that only fit the small number of people who were involved as users in the workshops. This can be alleviated through conducting broader evaluations that ensure that the designs are fit for purpose. Second, it is not a ’discount’ methodology that is fast and easy to apply. Implementing it requires several lengthy workshops with users to be organised, separated in time so that researchers and designers can analyse and produce new materials in subsequent activities. This means that even relatively small projects can spread over several months, from initial recruitment of users to a fully refined and evaluated interface. Because we want to guard against ’overfitting’ designs to small numbers of participants, it is not advisable to cut this process short and skip the broader evaluation to save time. Last, this method focuses very much on the mental model of users and does not account for the input of ’experts’ or consider how people should reason. Hence, it is possible that we might build biases that users have back into these interfaces, and only support current ways of working. How to successfully mitigate fairness issues, especially through a human-in-the-loop approach, is still an open research question.</p>
      <p>We believe that our method is another step to strengthen the design of responsible and ethical AI. A major advantage of CoFAIR is that it produces designs and interfaces that focus heavily on what specific target users need and want. It thus produces ’shrink-wrapped’ interfaces that should be eminently suitable for communicating with a specific user group. Taken together, this method could easily be extended to investigate what and how to explain machine learning systems, in order to design more responsible and ethical AI systems.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Conclusion</title>
      <p>[29] T. Neate, A. Bourazeri, A. Roper, S. Stumpf, S. Wilson, Co-Created Personas: Engaging and Empowering Users with Diverse Needs Within the Design Process, in: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, CHI ’19, ACM, New York, NY, USA, 2019, pp. 650:1–650:12. URL: http://doi.acm.org/10.1145/3290605.3300880. doi:10.1145/3290605.3300880, event-place: Glasgow, Scotland, UK.
[30] S. G. Hart, L. E. Staveland, Development of NASA-TLX (Task Load Index): Results of Empirical and Theoretical Research, in: P. A. Hancock, N. Meshkati (Eds.), Advances in Psychology, volume 52 of Human Mental Workload, North-Holland, 1988, pp. 139–183. URL: http://www.sciencedirect.com/science/article/pii/S0166411508623869.</p>
    </sec>
  </body>
  <back>
  </back>
</article>