Universal golden rule for human-technology interaction design
Mikko Rajanen
INTERACT Research Unit, University of Oulu, Finland

                 Abstract
                 This paper aims to raise awareness and discussion about the ethical dimensions of human-
                 technology interaction design in socio-technical systems in general, the responsibility of the
                 Human-Computer Interaction (HCI) designer towards users, stakeholders and society in
                 particular, and the rise of the dark side of design and the responses of the HCI community to
                 it. The paper identifies four dimensions of human-technology interaction design ethics and, to
                 sum up these dimensions and the responsibilities of a designer, concludes with a proposed
                 universal golden rule for designing human-technology interactions: Design as easy to use,
                 honest, sustainable, and safe human-technology interactions as you would want others to
                 design for you.

                 Keywords
                 Design ethics, human-technology interaction, human-computer interaction, socio-technical
                 systems, universal golden rule

8th International Workshop on Socio-Technical Perspective in IS Development (STPIS'22), August 19-21, 2022
EMAIL: mikko.rajanen@oulu.fi
ORCID: 0000-0002-3281-7029
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License
Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

1. Introduction
   What do the following cases have in common? a) a doctor struggling with a difficult-to-use
information system that consumes valuable time with patients, b) a person booking a flight without
realizing that selecting a country of residence will automatically add an unneeded and unwanted travel
insurance to the booking, c) a person visiting a website begrudgingly accepting all cookies because it
is far easier than trying to select which cookies to accept, and d) an airliner crashing with fatalities
after the pilots mistake an autopilot setting. All these real-life cases have their origin in the design of
the systems, more specifically in the user interfaces and the human-technology interaction design of
these systems. In these cases the designers of these user interfaces and interactions have not fulfilled
their ethical responsibility towards their users, stakeholders and society in general.
   The purpose of this position paper is to raise awareness and create discussion about the ethical
dimensions of human-technology design of socio-technical systems in general, the responsibility of
the Human-Computer Interaction (HCI) designer towards users, stakeholders and society in particular,
and the rise of the dark side of design and the responses of the HCI community to it. This paper
identifies four dimensions of human-technology interaction design ethics and proposes a universal
golden rule for human-technology interaction design.
   In the socio-technical systems approach, a system consists of complex and interrelated interactions
between technical systems and social systems, all aiming towards a common goal [1]. These complex
interactions between individuals and technology must therefore be designed well enough for these
common goals to be reached [2]. Socio-technical HCI design today focuses on innovative and balanced
relations between technology, organization, tasks and users [3]. In addition, socio-technical HCI has
a less participatory focus, aiming at designing for organizational capacity enhancement and the
interests of both users and organizational management [3], [4].
   Mumford [5] stated that complex projects often share similar characteristics: they affect large
numbers of people, who can face very serious problems if the system does not deliver the level of
service that these people need, and on time. Understanding and solving these complex problems is the
only way we can design and develop systems that are efficient, effective and innovative [5]. Design
involves making choices and taking risks, as designers cannot know everything [5]. Therefore, a
critical question in the design field is how we can make good design decisions in complex situations
and, once those design decisions have been implemented, how we can live with the results [5]. Not all
design errors have grave consequences, but even minor problems in human-technology design can
create considerable trouble and wasted time for users, organizations and society in general [6].
   Given the complexity of the systems designed and the large number of users they affect, designers
cannot generalize designs or design activities and follow a blueprint with predefined rules and
characteristics, because contexts of use change, users change, and the whole system and its socio-
technical context change [7]. Therefore, ethical design should be seen as an evolutionary process, and
the designer should try to direct this evolution and its steps in an ethical and sustainable way [7].
   Researchers have warned about the risks of design activity and the ethical implications of design
(see e.g. [7], [8], [9], [10]). Furthermore, there have been calls for designers to have a social and moral
responsibility for the consequences of their actions [7], [8], [11]. Therefore, designers should be made
aware of the ethical implications of design, and of their responsibility towards users, stakeholders and
society, through the education of future designers. Educating future ethics-aware designers, as well as
current designers, is very important, as technological advances such as the Internet of Things, artificial
intelligence, autonomous driving and 6G will certainly introduce new ethical challenges that designers
have to tackle. Furthermore, there have been calls for new, contemporary and open perspectives on
the design of socio-technical systems to ensure that new systems are meaningful to all engaged actors
[12]. In this human-centered socio-technical approach, users and other individuals in organizations
and in society in general should act as active stakeholders in improving and contributing to their
environment [13]. Furthermore, the ideation, design, and development of new technologies imply that
a human influences these technological advances so that the technology fits the needs and capabilities
of the human and social components [13].
   Ethical questions related to the design of complex socio-technical systems are not easy. For
example, should a designer, as an expert in human-technology interaction, design a user interface for
launching nuclear weapons? If a particular designer refused such a task on personal moral grounds,
they might then have to worry that a less capable designer would design this interaction instead,
creating possibilities for mistakes and bad design. Therefore, designers as individuals and as
communities need universal rules to act as moral compasses, so that designers know how to make
ethically sound decisions in increasingly complex design contexts. Furthermore, in addition to personal
ethics, social ethics, which focuses on the social arrangements for decision-making in an iterative
design process and on how people make decisions collectively, should be taken into account when
formulating universal rules for ethical design.
   In general, there are four requirements in any professional field that define and qualify a person as
a professional in that field [14]: 1) extensive intellectual training that involves mastering a complex
body of knowledge, 2) an expectation of contribution to society through professional service, 3) an
assumption of autonomous judgement in work carried out as a professional, and 4) adherence to a
regulated set of professional behavioral standards embodied in a code of ethics for that particular field.
One example of such a field is medicine, where the Hippocratic Oath has guided the work of doctors
explicitly and implicitly for millennia. The purpose of these four requirements is to establish the status
of the profession, to regulate its membership, and to convince society that the profession is capable of
self-regulation and does not need to be regulated by society. Individuals make ethical decisions on
individual designs, but designers decide collectively what is considered acceptable, moral and ethical
design in their profession, as well as making it policy that designers who conduct unethical design are
not considered recognized members of the profession.
   Professional ICT work in a socio-technical context, which includes information systems analysis
and design, software development, and HCI design and evaluation among many other things, should
also involve understanding and following an ethical code of conduct. It can be argued that HCI
professionals should adopt and enforce professional codes of ethics in their professional work, as their
work influences users, stakeholders and society in the socio-technical context (see e.g. [15]). The
Association for Computing Machinery (ACM) has formulated a professional code of ethics for ICT
work, arguing that the very future of computing depends not only on technical excellence but also on
ethical excellence [16].
   In order to understand the ethical dimensions of human-technology interaction design, we first need
to look at how this interaction is designed and how complex it is, as well as at the ethical implications
of design.

2. Ethics of design
    The development of information systems in a socio-technical context is never merely a technical
issue, as it also encompasses and affects the organization, the users and the process of change [17]. In
order to find the best way to develop such a complex system for a specific purpose in a specific context
of use, the system first has to be carefully designed. Design is a creative activity in which different
design possibilities and futures are evaluated and one of them is chosen based on a set of criteria and
careful evaluation [10]. Design is therefore never a straightforward or self-evident activity: if there are
no design choices to be made, there is no design at all [10]. Because of this freedom to choose between
different possibilities, designers have to adopt criteria for what, from their perspective, is the best thing
to do [10]. From the socio-technical systems perspective, the design of socio-technical systems
provides a new perspective and an unparalleled opportunity to improve the quality of working life and
humanism at work, with the possibility of replacing tight controls, bureaucracy and stress with
technology that enhances human freedom, democracy and creativity [18].
    Design may be the best place to study ethics in technology, because design affects us all; however,
not all of us are involved in design, and this asymmetry has great importance for the social ethics of
technology [19]. As the ethics of design deals with what is good and bad and what is wrong or right,
designers have to constantly make ethical choices, consciously or not [10].
    The famous German industrial designer Dieter Rams [20] codified the principles of good design,
three of which apply directly to ICT and HCI design in a socio-technical context: 1) good design is
honest, meaning that the design is not done with an intent to mislead the user into doing something or
to prevent the user from doing something, 2) good design makes the system, product or service useful
for its users, meaning that the users will benefit from the design, and 3) good design is understandable,
meaning that the user does not need manuals or training to understand the design and its possibilities.
    Therefore, the design of socio-technical systems has an ethical perspective, which can be
approached from general ethical principles, such as Kant's moral theory and categorical imperative:
“Act only according to that maxim whereby you can, at the same time, will that it should become a
universal law” ([21], p. 30), as well as the universal golden rule: “Do to others what you want them to
do to you”. However, these general rules are not very actionable for designers in general, or for human-
technology interaction designers in the HCI field in particular, as there are many perspectives on
design ethics. Nevertheless, these general rules can be built on and adapted into a universal golden
rule specifically tailored for educating past, present and future designers of socio-technical systems.
    While there are many perspectives for addressing ethical issues in general, there are three major
traditional approaches to the theoretical dimension of ethics [7]: 1) ethics based on obligation and
duty, doing what is right and proper (deontology), 2) ethics based on maximizing utility according to
principles and goals (teleology), and 3) ethics based on the role of individual virtues (virtue ethics).
However, no matter which approach is taken, design ethics should, as far as possible, be able to foresee
and prevent future problems, while being able to address current problems effectively [7].
    Although designers may not be able to foresee all consequences of their designs, they should always
try to anticipate different ethical scenarios and possible ethical issues, carefully consider the
consequences of their innovations, and make efforts to uncover the values, potentials, motivations and
commitments that different stakeholders bring into the design process [7], [11], [22]. Designers should
keep past, present and future users in the design loop in order to understand how things work and to
better understand the cause and effect of different actions [7]. Designers
of socio-technical systems should reconcile the social component with the technological one [1], [7].
They need to pay attention to several implications, many of which are unexpected, to ensure that users
and other stakeholders are not exposed to risks [7], [22]. However, the behavior of users is generally
unpredictable and most likely cannot be fully controlled [23]. Therefore, every design activity, such
as task formulation, introduces a moral and ethical aspect, because the design outcomes, such as user
tasks, have a direct impact on the safety of the users and on their perception of the system [24].
    In order to formulate a universal golden rule that addresses the complexity of ethical design, we
need to identify the different perspectives of ethical design. First, we look at ease of use.

2.1.    Designing for ease of use
    As technology advanced and computers became more common in the 1980s, minimizing the
resources a user needed to expend to achieve their tasks was identified as an important concern in the
HCI community. Therefore, the concept of usability was expanded to include ease of use, with the
intention of saving human time and labor, which had become more expensive than computing time
[25]. As the number and complexity of information systems increased, the HCI community identified
the need for a standardized process for designing for better usability and ease of use, and the process
of user-centered design was introduced. User-centered design advocated several small usability design
and evaluation activities spanning the entire development process instead of a few larger usability
evaluations at the end of the process, when the design was already finalized and changes would be
expensive (see, e.g. [25], [26]). Despite a popular opinion amongst developers and managers, the
outcome of this design of human-technology interaction is not a matter of taste or subjective opinion;
it can and should be objectively evaluated, while user experience covers the subjective aspects of
human-technology interaction [27].
    The costs of bad design causing problems for users in their everyday work are not easy to calculate,
and there are few concrete examples [6], [28], [29]. As a real-life example of the impact that seemingly
insignificant design choices affecting ease of use may have in a larger perspective, consider an
information system introduced in the early 2000s in Finnish hospitals for recording dictations by
medical doctors as part of their routine practice after each appointment with a patient [29]. The human-
technology interaction of this system was not designed with ease of use in mind; for example, saving
a dictation after each patient required sixty (60) mouse clicks [30]. Assuming that each selection took
at least one second, just saving one dictation took one full minute of extra time from a medical doctor,
time that could have been better used for extra patient appointments. While the exact effects of this
problem are difficult to calculate, the worst-case scenario can be estimated by multiplying this wasted
time by the total number of medical doctors in Finland, adding up to 4200 hours, or 525 full working
days, potentially lost every day because of just this one ease-of-use problem in just one task performed
using just one medical information system in one country [29]. As the time of medical doctors would
be much better spent interacting with their patients rather than with computer systems, this case
highlights the importance of designing for ease of use in the larger socio-technical context. Another
well-known example of the impact of small design problems on ease of use was the “300 million dollar
button”, where making small changes to the human-technology interaction as a result of consulting
the users increased a website's annual revenue by 300 million dollars [31].
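    To make the arithmetic behind this worst-case estimate explicit, the following sketch reproduces
the cited totals. The per-doctor figures (number of doctors, dictations per day) are illustrative
assumptions chosen only so that the totals match the ones reported above; they are not data from [29]
or [30].

```typescript
// Worst-case estimate of time lost to a single ease-of-use problem:
// 60 mouse clicks to save one dictation, at roughly one second per click.
// The head counts below are illustrative assumptions, not figures from
// the cited sources.
const clicksPerDictation = 60;         // reported in [30]
const secondsPerClick = 1;             // lower-bound assumption in the text
const doctors = 21_000;                // assumed number of doctors in Finland
const dictationsPerDoctorPerDay = 12;  // assumed dictations per doctor per day

const secondsLost =
  doctors * dictationsPerDoctorPerDay * clicksPerDictation * secondsPerClick;
const hoursLostPerDay = secondsLost / 3600;        // 4200 hours per day
const workingDaysLostPerDay = hoursLostPerDay / 8; // 525 eight-hour days

console.log(`${hoursLostPerDay} hours, i.e. ${workingDaysLostPerDay} working days, lost per day`);
```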
    Therefore, good usability, and particularly ease of use, has a profound effect on the level of
interaction the user has with the system, on their experience with the system, and on the overall quality
of the system and its functions [32]. Usability in general and ease of use in particular are important
quality characteristics of software, systems and services, and they are vital in facilitating the rich
interaction between users and technology, and between the social and technical systems in the socio-
technical context [13]. The concept of usability has been constantly evolving to adapt to advances in
technology and other emerging needs in the socio-technical landscape, and as a result the focus of
usability research and practice has been constantly expanding (see, e.g. [25]). The different usability
standards (e.g. [33]) act as time capsules, having different approaches, viewpoints and
conceptualizations of usability, thus representing the views and best practices of their time as the HCI
field, as a professional community, has seen them (see e.g. [34]).
    This raises the fundamental question of whether HCI and its usability and UX design are all about
removing obstacles and problems to streamline user activities and the interaction between user and
technology, or whether it is also about pursuing some greater good. Some indication can be found in
the core tenets of the HCI field, which argue that users are unique and valuable as individuals, as
groups and as a society in general, and that it is therefore worth designing user interfaces and tasks for
systems that improve the work and life of these users. The bottom line is that the good design of these
socio-technical systems is the responsibility of the HCI designer, as the users, stakeholders or society
can do very little to affect the quality of the designed human-technology interaction. The HCI
community has been fighting for easy-to-use designs and against bad designs and bad usability since
the concept of usability was created in the 1980s. The goal has been to educate designers and other
stakeholders and to advocate good design practices and processes. However, as Le Guin [35] stated:
“To light a candle is to cast a shadow”, and so it has not been a surprise that these HCI practices and
processes, created for the noble purpose of making the life of users easier, have also started to be used
for unethical and deceptive purposes. Therefore, next we need to look at the importance of honest
design and the rise of the dark side of design.

2.2.    Honest design
    One of Dieter Rams' principles of good design is: good design is honest [20]. Honest design means
that the design is not done with an intent to mislead the user into doing something, to prevent the user
from doing something, or to otherwise manipulate the user. Recently, however, the very processes and
principles of HCI that were created to help design good human-technology interaction have been used
to create misleading and deceptive designs. This kind of dark design or deceptive design is not bad
design. Bad design is an honest mistake, where the designer should have known better but ended up
producing a bad design out of ignorance or honest oversight. Dark design, or a dark pattern, is instead
done with a deliberate intent to mislead or deceive the user [36]. The dark side of design uses the same
methods and processes developed to create good interfaces for users to design human-technology
interaction with malicious intent, either causing users to do something they did not intend to do or
preventing them from doing something they intended to do. Dark design misuses good and trusted
design principles with malicious intent. Nevertheless, a designer of either bad design or dark design
violates the ethical code and the human-centered values of the HCI design community, and violates
the designer's responsibility towards users, stakeholders and society.
    There is an interesting conundrum as to when exactly persuasive design of systems and human-
technology interaction becomes misleading. Persuasive design can be used to nudge users in a
particular direction that might be beneficial for them, such as encouraging them to exercise more, eat
healthier, consume less energy, or not to access suspicious websites. However, there is a point where
persuasion turns into misleading and deception. It can be argued that dark design or deceptive design
is done in favor of the shareholders to the detriment of the users [37]. Ryanair was infamous for
misleading users into choosing travel insurance: under “Please select a country of residence”, the
option “No travel insurance required”, placed in an alphabetical list of countries, was the only way to
avoid mistakenly buying an unnecessary travel insurance [38]. Furthermore, research has shown that
many websites use a variety of dark designs or dark patterns to circumvent the intent of the GDPR in
their cookie consent dialogues [39].
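    A minimal sketch of this contrast, with hypothetical names and logic rather than actual code from
any booking site, could look as follows: in the dark variant the opt-out masquerades as a country in
the residence list, while in the honest variant insurance is a separate, explicit opt-in.

```typescript
// Hypothetical illustration of the dark pattern described above;
// not actual code from any booking site.
interface Booking { residence?: string; insurance: boolean; }

// Dark pattern: the opt-out is disguised as a "country" in the residence
// list, so choosing any real country silently adds travel insurance.
function onResidenceSelectedDark(option: string, booking: Booking): void {
  if (option === "No travel insurance required") {
    booking.insurance = false;  // opting out hides among country names
  } else {
    booking.residence = option; // a real country...
    booking.insurance = true;   // ...silently adds the unwanted extra
  }
}

// Honest design: residence and insurance are separate questions, and
// insurance stays off until the user explicitly opts in.
function onResidenceSelectedHonest(country: string, booking: Booking): void {
  booking.residence = country;  // selecting a country changes nothing else
}
function onInsuranceOptIn(booking: Booking): void {
  booking.insurance = true;     // only an explicit action adds the extra
}
```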
    There are also better and more ethical ways of persuading and enticing users to buy something
additional, for example by showing what other customers have also bought when they bought what
this user is interested in. Such ethical design benefits users, stakeholders and society alike. However,
designers of human-technology interaction in general, and HCI professionals in particular, should
address the rise of the dark side of design and find ways to educate both the general public and fellow
designers on how to identify and avoid deceptive designs. The reputation of all designers might be
irreparably tarnished by a few bad apples who fall to the dark side. Legislators and regulators have
already taken notice of the rise of dark design and have started enacting legislation and regulations
defining, restricting and prohibiting deceptive design [40]. Therefore,
designers of human-technology interaction should act fast and show legislators and society that the
design fields can self-regulate and fight against dark design, before design work becomes more and
more regulated and restricted. However, it is not only humans and society that are affected by design
choices, but also the environment and our planet in general. Therefore, next we look at the ethical
perspective of sustainable design.

2.3.    Sustainable design
    Ethical design can have an environmental impact by minimizing the amount of materials required
for a software product or service, for printing, or for manufacturing products, as well as by minimizing
waste, hazardous emissions, and the use of energy and materials [15]. There have been grave concerns
about the sustainability of design [7]. Design work in general has been characterized as the second
most harmful profession that can be practiced, responsible for many difficult, harmful, troubling and
dangerous situations in our world (see e.g. [7], [8], [9], [10], [11]). A design that is useful to its users,
business or society has a longer life-cycle, which has a positive impact on sustainability through
minimizing the use of resources and waste [15].
    Manzini [10] identified three principles for designing sustainable solutions: 1) promote sustainable
wellbeing, 2) enable people to live as they like, in a sustainable way, and 3) enhance social innovation
and steer it towards more sustainable ways of living. However, there is a tension between the historical
focus on technological novelty and human-technology interaction innovations in HCI research and
practice, and the aims of sustainability and sustainable design [41]. Conversely, it has been argued that
existing HCI design principles could contribute to other fields of research and practice, and that this
kind of multidisciplinary approach could then lead towards achieving the goal of sustainability,
depending on the context and purpose [42]. However, the roles of human-technology interaction
design and HCI design principles in sustainable design should be studied further.
    All these other perspectives of ethical design do not matter much, however, if the design can
endanger the wellbeing of users and other stakeholders. Therefore, next we look at the safety
perspective of human-technology interaction design.

2.4.    Safe design
    The safety implications of designing human-technology interactions have been highlighted by
many well-known accidents in different fields. The first example of designer responsibility is Air Inter
Flight 148, which crashed while preparing for landing [43]. The captain configured the autopilot for a
slight descent angle so that the crew could complete their landing preparations and checklists.
Moments later the plane crashed into a mountain, and the crew did not have enough time to react and
save the plane and its occupants. The investigation found that the autopilot had a confusing user
interface which made it easy to mix up two autopilot modes: flight path angle (FPA) and vertical speed
(V/S). The captain thought that he had selected the FPA mode, but in reality the autopilot was in V/S
mode, and the only visual differences between these modes were a small mode label and a decimal
point between the digits in FPA mode [43]. The flight crew thought that the plane was descending at
a normal -3.3 degree flight path angle, but in reality it was descending at 3300 feet per minute, much
faster than intended. The crash was therefore inevitable, and there was nothing the flight crew could
do to save the situation once they became aware of the danger. Later, when these conditions were
tested in a flight simulator, most crews missed the wrong autopilot setting and inevitably crashed the
plane, no matter how experienced the crew was [43].
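    The ambiguity can be illustrated with a small display-formatting sketch; this is only an illustration
of the reported interface behavior, not actual avionics code. The same pilot intention renders almost
identically in the two modes, differing only in a decimal point: “-3.3” for a 3.3 degree flight path angle
versus “-33” for 3300 feet per minute of vertical speed (shown in hundreds of feet per minute).

```typescript
// Illustrative sketch of the reported display ambiguity; not avionics code.
type VerticalMode = "FPA" | "VS";

function renderVerticalTarget(mode: VerticalMode, value: number): string {
  // FPA shows degrees with one decimal place; V/S shows the vertical
  // speed in hundreds of feet per minute, with no decimal point.
  return mode === "FPA" ? value.toFixed(1) : String(Math.round(value));
}

console.log(renderVerticalTarget("FPA", -3.3)); // "-3.3" → 3.3° descent angle
console.log(renderVerticalTarget("VS", -33));   // "-33"  → 3300 ft/min descent
```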
    A second example of a seemingly innocent design choice for human-technology interaction, this
time resulting in a near disaster, again involves an autopilot user interface, on Loganair Flight 6780
[44]. The pilots were struggling to regain control after the plane was struck by lightning and entered
a steep dive. They were able to recover the plane only seven seconds before hitting the ocean. One of
the identified root causes of this serious incident was the design of the user interface for the autopilot
status, as well as the design of the human-technology interaction of the autopilot itself [44]. The pilots
thought that the autopilot was off after the lightning strike, but unknown to them the Saab 2000 was
one of the extremely few aircraft in which pilot inputs to the flight controls do not turn off the autopilot.
Furthermore,
the autopilot status was indicated only by a small “AP” text on the flight display. The text was green
when the autopilot was on, but remained visible as white text when the autopilot was off. It was
therefore extremely difficult for the pilots to establish the true status of the autopilot during an
emergency; they were not aware that the autopilot would not disengage on their control inputs, and so
they were fighting against an autopilot that was programmed for the approach into landing. A total
disaster was avoided only because the rapid descent introduced invalid data into the flight computer,
which finally turned off the autopilot in the nick of time [44].
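    The status indication can be sketched in the same illustrative, non-avionics spirit: the label is
identical in both states and only its color changes, which is easy to misread under stress. One of many
possible clearer alternatives is shown for contrast; it is an assumption of this sketch, not a
recommendation from the cited report.

```typescript
// Illustrative sketch of the reported status indication; not avionics code.
// The "AP" label stays visible in both states, differing only in color.
function renderAutopilotStatus(engaged: boolean): { label: string; color: string } {
  return { label: "AP", color: engaged ? "green" : "white" };
}

// A hypothetical less ambiguous design spells the state out, so that the
// disengaged state cannot be mistaken for the engaged one at a glance.
function renderAutopilotStatusClear(engaged: boolean): { label: string; color: string } {
  return engaged
    ? { label: "AP ENGAGED", color: "green" }
    : { label: "AP OFF", color: "amber" };
}
```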
    A third such case was the Therac-25 radiotherapy system, whose designers did not get to know the
real users, their tasks and their working conditions, and did not test their design with real users and
real tasks [45]. Furthermore, the designers did not take into account that users can make mistakes in
the human-technology interaction. Unfortunately, as a result of this problematic design, at least six
people received massive radiation overdoses, several of them fatal [46]. As a result, the design and
development of such medical devices became strictly regulated by society, which could no longer trust
the professionals to self-regulate.
    Furthermore, the increased level of automation in systems, as well as advances in artificial
intelligence and autonomous systems, has been raising safety concerns [7]. It has been argued that by
proactively conforming to regulations as well as to ethical and inclusive principles, and by showing a
safety mindset, the designers of autonomous driving could show that automated driving does not have
to be heavily regulated by legislators [47].
    Having identified these perspectives of ethical design, we can now move to the conclusions, where
a universal golden rule for human-technology interaction design is proposed and its implications for
research and practice are discussed.

3. Conclusions
    Designers of human-technology interaction in a socio-technical context are professionally and
ethically responsible towards 1) users and other stakeholders, 2) companies and other organizations,
3) society in general, and 4) the environment and sustainability. Furthermore, designers are responsible
to themselves and to the professional body of designers for ensuring that their design achieves the
desired high ethical standards, and that the designer has the education and moral inclination to follow
good design practices and not to do any harm to users, stakeholders, society or the environment.
Designing human-technology interaction for complex systems in a complex socio-technical context
carries a risk of failure (see e.g. [5]). Designers, as individuals and as professional fields, need universal
rules to act as moral compasses guiding them to the right design choices, and as moral backbones
helping them resist pressure to make unethical design choices.
    It is important that HCI professionals, researchers and practitioners continue their fight against bad
design and keep educating users, stakeholders and the general public that problems in human-
technology interaction are not the fault of the users, and that good design is the professional
responsibility of human-technology interaction designers. HCI as a field should continue this good
work through education, training, and going back to basics [27]. Additionally, HCI professionals and
other design professionals should actively start fighting against the rise of dark design and advocating
honest design practices. Users, designers and society in general should be made aware of the existence
of dark design and dark patterns in design. Furthermore, designers should recognize their ethical
responsibility towards users, stakeholders, society and the environment. Starting this discourse and
acting on the rise of dark design is very important for the future of HCI as a field, if HCI as a design
field wants to avoid society starting to regulate its design activities, as has happened in the design of
medical systems. Curricula should include courses on design ethics, designer responsibility and the
rise of the dark side of design. The theme of design ethics should also be reflected in other substantive
courses, so that future design professionals in HCI and in other fields understand the value of users
and of ethical design, and the dangers of bad design and dark design.
    To sum up these different aspects of ethical design and the responsibilities of a designer, this
position paper concludes with a proposed universal golden rule for designing human-technology
interactions: Design as easy to use, honest, sustainable, and safe human-technology interactions as
you would want others to design for you. This universal golden rule could be discussed, further refined,
and
brought into HCI education and practice. It could highlight the importance of ethical discourse among
HCI educators, practitioners and researchers, as well as users, stakeholders and society in general.
This kind of golden rule could act both as a moral compass guiding designers and as a moral backbone
that designers can rely on when unethical design is requested. Designers who follow the universal
golden rule for human-technology interaction design can be confident that they have done their best
and can live with the consequences of their design.

4. References
[1] R. P. Bostrom, J.S. Heinen, MIS problems and failures: a socio-technical perspective, part II: the
     application of socio-technical theory, MIS Quarterly, vol. 1, no. 4, pp. 11–28, 1977. Available:
     https://doi.org/10.2307/249019
[2] E. Mumford, Participative systems design: Practice and theory. Journal of Occupational
     Behaviour, pp. 47-57. (1983).
[3] T. Clemmensen, Human Work Interaction Design: A Platform for Theory and Action. Springer.
     2021.
[4] L. Bannon, J. Bardzell, S. Bødker, Reimagining participatory design. Interactions, 26(1), pp. 26-
     32. 2018.
[5] E. Mumford, A socio-technical approach to systems design, Requirements Engineering, vol. 5, no.
     2, pp. 125–133, 2000. Available: https://doi.org/10.1007/PL00010345
[6] M. Rajanen, Applying Usability Cost-Benefit Analysis - Explorations in Commercial and Open
     Source Software Development Contexts. PhD Dissertation. Acta Universitatis Ouluensis Series A
     587. University of Oulu. 2011.
[7] E. Fiore, "Ethics of technology and design ethics in socio-technical systems: Investigating the role
     of the designer." FormAkademisk-forskningstidsskrift for design og designdidaktikk 13.1 (2020).
[8] V. Papanek and R. Buckminster Fuller, Design for the real world. (1972).
[9] J. Thackara, In the bubble, Designing in a complex world, MIT Press, Cambridge (2005).
[10] E. Manzini, Design, ethics and sustainability, Guidelines for a Transition Phase. University of Art
     and Design Helsinki (June) (2006): 9-15.
[11] A. Mink, Designing for well-being: An approach for understanding users’ lives in Design for
     Development. Doctoral thesis. Delft University of Technology. (2016).
[12] P. M. Bednar and C. Welch, Socio-technical perspectives on smart working: Creating meaningful
     and sustainable systems. Information Systems Frontiers, pp. 1-18. (2020).
[13] M. Rajanen, D. Rajanen, Usability as Speculum Mundi: A Core Concept in Socio-Technical
     Systems Development. Complex Systems Informatics and Modeling Quarterly (CSIMQ), no 22,
     pp. 49-59. 2020. DOI: 10.7250/csimq.2020-22.04
[14] R. A. Spinello, Case studies in information and computer ethics. Upper Saddle River, NJ. Prentice
     Hall. (1997).
[15] M. Rajanen and D. Rajanen, Usability: A Core Concept in Socio-Technical Systems Development,
     In Proc. of STPIS 2019 @ECIS. 2019.
[16] ACM, ACM Code of Ethics and Professional Conduct (2021) https://ethics.acm.org
[17] E. Mumford, Effective systems design and requirements analysis. Macmillan International Higher
     Education. (1995).
[18] E. Mumford, (2003) Redesigning Human Systems. Hershey, PA: Information Science Publishing.
[19] R. Devon, Towards a Social Ethics of Technology: A Research Prospect. Techné: Research in
     Philosophy and Technology, 8(1), 99–115. DOI: 10.5840/techne20048112. (2004).
[20] D. Rams, “Dieter Rams: Design by Vitsoe.” (1976)
[21] I. Kant, Groundwork of the Metaphysic of Morals (1785). Translated by J. W. Ellington, 3rd ed.
     Hackett. (1993). ISBN 0-87220-166-X.
[22] A. Albrechtslund, Ethics and Technology Design. Ethics and Information Technology 9(1): 63–
     72. DOI: 10.1007/s10676-006-9129-8. (2007).
[23] P. Kroes, A. Light, S. A. Moore and P. E. Vermaas, Design in engineering and architecture:
     Towards an integrated philosophical understanding. In P. E. Vermaas, P. Kroes, A. Light, S. A.
     Moore (Eds.), Philosophy and Design: From Engineering to Architecture (pp. 1–17). Dordrecht,
     the Netherlands: Springer. (2008).
[24] C. Wilson, Taking usability practitioners to task, interactions 14.1 (2007): 48-49.
[25] M. Rajanen, D. Rajanen, Usability: A Cybernetics Perspective. In Proc. of the 6th International
     Workshop on Socio-Technical Perspective in IS development (STPIS'20). 8-9 June 2020. pp. 28-
     33. 2020b. http://ceur-ws.org/Vol-2789/paper5.pdf
[26] ISO 9241-210, Ergonomics of human-system interaction — Part 210: Human-centred design for
     interactive systems (2019) International standard.
[27] M. Rajanen, De gustibus non est disputandum, but usability is not a matter of taste. In Proc. of
     STPIS 2021. pp. 189-197. (2021).
[28] M. Rajanen, Different Approaches to Usability Cost-Benefit Analysis, Proc. ECITE 2006 (2006):
     391-397.
[29] M. Rajanen, Usability Cost-Benefit Analysis for Information Technology Applications and
     Decision Making. In E. Idemudia (Ed.), Information Technology Applications for Strategic
     Competitive Advantage and Decision Making. IGI Global. DOI: 10.4018/978-1-7998-3351-
     2.ch008 (2020)
[30] National Audit Office of Finland, Käyttäjäystävällisillä tietojärjestelmillä jopa 400 000
     lääkärin vastaanottoaikaa lisää [With user-friendly information systems, up to 400,000 more
     doctor's appointment times]. (2012).
     Retrieved from: https://www.vtv.fi/tiedotteet/kayttajaystavallisilla-tietojarjestelmilla-jopa-400-000-laakarin-vastaanottoaikaa-lisaa/
[31] L. Wroblewski, Web form design: filling in the blanks. Rosenfeld Media, 2008.
[32] A. Madan and S. K. Dubey. "Usability evaluation methods: a literature review." International
     Journal of Engineering Science and Technology 4.2 (2012): 590-599.
[33] ISO 9241-11: Ergonomic requirements for office work with visual display terminals (VDTs) -
     Part 11 Guidance on usability (1998)
[34] D. Marghescu, Usability evaluation of information systems: A review of five international
     standards. In Information Systems Development (pp. 131-142). Springer, Boston, MA. 2009.
[35] U. K. Le Guin, A wizard of Earthsea. Houghton Mifflin Harcourt, 2012.
[36] H. Brignull. Dark Patterns: inside the interfaces designed to trick you. 2013.
     http://www.theverge.com/2013/8/29/4640308/dark-patterns-inside-the-interfaces-designed-to-
     trick-you
[37] C. M. Gray, Y. Kou, B. Battles, J. Hoggatt, and A. L. Toombs, The Dark (Patterns) Side of UX
     Design, Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems -
     CHI '18. New York, New York, USA: ACM Press: 1–14. (2018).
[38] C. Barry, M. Hogan, and A. Torres, Perceptions of low-cost carriers’ compliance with EU
     legislation on optional extras. In Information Systems Development (pp. 669-680). Springer, New
     York, NY. (2013).
[39] T. H. Soe, O. E. Nordberg, F. Guribye and M. Slavkovik, Circumvention by design: dark patterns
     in cookie consent for online news outlets. In Proceedings of the 11th Nordic Conference on
     Human-Computer Interaction: Shaping Experiences, Shaping Society (pp. 1-12). (2020).
[40] A. Mathur, M. Kshirsagar and J. Mayer, What makes a dark pattern... dark? Design attributes,
     normative considerations, and measurement methods. In Proceedings of the 2021 CHI
     conference on human factors in computing systems (pp. 1-18). (2021).
[41] M. S. Silberman, L. Nathan, B. Knowles, R. Bendor, A. Clear, M. Håkansson, ... and J. Mankoff,
     Next steps for sustainable HCI. Interactions, 21(5), 66-69. (2014).
[42] T. Nyström and M. M. Moyen, Sustainable information system design and the role of sustainable
     HCI, Proceedings of the 18th International Academic MindTrek Conference: Media Business,
     Management, Content & Services. 2014.
[43] Ministry of Transport and Tourism, Official Report into the accident on 20 January 1992 near
     Mont Sainte-Odile (Bas-Rhin) of the AIRBUS A320 registered F-GGED operated by Air Inter
     [English translation]. Bureau of Enquiry and Analysis for Civil Aviation Safety. 26 November
     1993. ISSN: 1148-4292. (2003).
[44] Air Accidents Investigation Branch, Report on the serious incident to Saab 2000, G-LGNO,
     approximately 7 nm east of Sumburgh Airport, Shetland, 15 December 2014. Aircraft Accident
     Report 2/2016. (2016).
[45] N. G. Leveson and C. S. Turner, An investigation of the Therac-25 accidents, Computer 26.7
     (1993): 18-41.
[46] S. Baase, Case Study: The Therac-25, A Gift of Fire: Social, Legal, and Ethical Issues for
     Computing Technology (4th ed.). Pearson Prentice Hall. pp. 425–430. ISBN 978-0132492676.
     (2012).
[47] M. Rajanen, Benefits of Usability and User Experience in Automated Driving. In Proc. of
     Eclipse SAAM Mobility 2021. (2021).