                                Mutual Configuration: Exploring the Dynamic Interplay of
                                Human-Computer Interaction as a Socio-Technical System
                                Jo Herstad1, Anders Mørch2∗

                                1 Department of Informatics, University of Oslo, Oslo, Norway

                                2 Department of Education, University of Oslo, Oslo, Norway




                                                Abstract
                                                Computers and humans are composed of different material (biology vs. hardware and software)
                                                but share many similarities at higher levels of abstraction. For example, thought and behavior
                                can be simulated by computational processes. Alan Turing’s Universal Computer, first proposed
                                in 1936, was designed based on insights into how a human computer went about computing:
                                by reading, writing, remembering, and following rules. The underlying computer-user framework
                                                was influenced by mathematicians and engineers. In this workshop position paper, we focus on
                                                the use and historical development of the concept of “end user” and the evolution of two seminal
                                                computer-user frameworks, the Universal Computer and the framework proposed by Lucy
                                                Suchman 50 years later to analyse human-computer communication. Our analyses highlight at a
                                                high level the reciprocal nature of computer use over time, and we argue: on the one hand,
                                                machines are becoming more like people and on the other, people are coming to define
                                                themselves more as virtual machines. We highlight similarities and differences of the two
                                                frameworks and suggest some implications for end-user development and human
                                                communication. Our argument is twofold. First, the computer should primarily be a tool for
                                                human use, and not the other way around. Second, we must develop a conceptual framework for
                                human-computer communication that considers how data from domain-expert computer users
                                                may in the long run lead to end-user conformity, thus approximating the behaviour of machines.

                                                Keywords
                                end-user, computer user model, HCI evolution, mutual configuration, socio-technical system



                                1. Introduction: Sociotechnical systems and artificial intelligence
                                We explore the concept of mutual configuration in human-computer interaction (HCI) in
                                this paper, focusing on how computers and humans shape each other over time. This
                                reciprocal relationship is becoming increasingly complex as AI systems become more
                                sophisticated. The conceptualization of the duality of humans and computers goes back to
                                the notion of socio-technical systems (STS). The STS concept in the context of
                                information systems (Trist, 1981) has been an influential source for describing, analyzing,
                                and thinking about the relationships between systems and people. Scandinavian



                                Proceedings of the 8th International Workshop on Cultures of Participation in the Digital Age (CoPDA 2024):
                                Differentiating and Deepening the Concept of “End User” in the Digital Age, June 2024, Arenzano, Italy
                                ∗ Corresponding author.

                                   johe@uio.no (J. Herstad); andersm@uio.no (A. Mørch)
                                    0000-0002-1470-5234 (A. Mørch)
                                           © 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).




researchers were early adopters of STS thinking outside of the UK, applying socio-technical
perspectives in the first Scandinavian participatory design (PD) projects. Kristen Nygaard
provided a conceptual foundation for PD with the notion of multiple perspectives (Nygaard
& Sørgaard, 1987). In the PD community, this led to the concept of mutual learning
(Bratteteig, 1997), which means that system developers must learn the end users’
professional language, and end users must learn the system developers’ (informatics)
language, to be able to articulate their needs as requirements specifications.
    A socio-technical system represents a shared perspective where social and technical
elements are intertwined in a reciprocal manner. On the one hand, the work activities are
conditioned and shaped by technical possibilities and constraints, and, on the other hand,
the technical system is shaped by human activities, user abilities, and team goals. This
reciprocal process engages users at multiple levels of participation in a complex system that
may potentially consist of more than two perspectives. Woolgar (1990) and others in the
social sciences used the term “configuring the user.” This phrase illustrates that the
computer “talks back” to us, shaping what we intend it to do. In a similar manner, researchers in
workplace learning used the term “co-configuration” (e.g., Engeström 2004). The term was
originally developed by scholars in management science to mean an emerging type of work
that generates new forms for learning. Characteristic for co-configuration is that it consists
of “customer-intelligent” products and service combinations, supporting continuous mutual
exchange between customers and developers over a long time (Victor & Boynton, 1998). In
an educational setting with artificial intelligence (AI) based writing aids, some researchers
have begun observing a somewhat disturbing phenomenon, namely that these systems do
not sufficiently encourage students to pursue novelty and instead lead to conformity
(Kukich, 2000). With the latest AI tools, tools based on large language models (LLMs) and
generative AI, mutual configuration has reached a new level. On the one hand, users can
configure these systems by pre-prompting, data training and algorithm tuning. On the other
hand, as humans interact and engage with AI systems, the algorithms and models powering
these technologies constantly learn from our actions, thereby adjusting their capabilities to
better suit our needs in future versions of the AI models. This constant interplay between
human input, AI-driven responses, and adjustment fosters a reciprocal shaping that drives
mutual configuration to new and unforeseen possibilities, some that will be good, and
others that we should avoid.
    To approach the phenomenon in a preconceptual manner, i.e., enabling us in the next
round to formulate research questions and hypotheses, we use the term mutual
configuration. Mutual configuration means the mutual shaping of humans and computers
during computer use. In this position paper we address the question of how the concept of
the end user has developed over time in terms of mutual configuration, connected with two
seminal computer-user frameworks.
    In parallel to the development of the end-user concept, there has been a steady stream
of literature about what the computer can, and cannot, do (Dreyfus, 1992). By using
language, thinking and naming the computer as a partner in various human activities, the
end user is configured in relation to this.
2. Human-computer communication frameworks
We use two seminal computer-user frameworks to guide our discussion. The first is the
Universal Computer (Turing Machine), which conceptualized the computer as a human
performing calculations. The second is Suchman’s framework, which conceptualizes the
situated nature of human-computer interaction and the importance of social factors,
involving two or more humans interacting within their environment, which includes, but is
not limited to, computers. Turing
had the human operator in mind when he suggested the Universal Computer by modelling
the computing machine on a person doing arithmetic operations with pen and paper
(Turing, 1936; 1950). Suchman criticized the later, refined human information-processing
model of the computer (Suchman, 1987). Today's AI systems are even more
versatile and have extended their reach to everyone, not just mathematicians, engineers,
computer scientists, and photocopier operators. Therefore, developers aim to create a new and
tighter relationship between humans and computers, which requires some serious
discussions in terms of long-term effects. We provide some steppingstones toward that end.

2.1. Turing’s framework of the Universal Computer
A computer in the 1930s was the name of a human being doing computation, such as a loan
officer or bank teller calculating interest in a bank. Turing used this framework of a “human
computer”, describing how a professional specialist operated, to propose a new method of
automated calculation, which later became the basic principles behind the Universal
Computer, subsequently named the Turing Machine (Turing, 1936). Human computers (e.g. bank
tellers) wrote, read and used exact “programs” or calculating procedures for performing
computation by hand with pencil, paper, and knowledge of basic arithmetic. When
computing with numbers such as 255.15 and 34.12, for example, Turing observed that the human computer read the
numbers one by one, wrote numbers back on the paper and performed operations both
horizontally and vertically. Turing discovered that it was possible to describe the process
by using a horizontal strip (a tape) with numbers (and more generally symbols) upon which
the program could read and write the numbers like an automated typewriter. Turing's
framework had an unlimited amount of tape and the means of going back and forth along
the tape to fetch symbols. The physical version of his framework required a finite tape.
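The mechanics Turing describes, a head reading and writing symbols on a tape while moving back and forth under a rule table, can be made concrete with a short sketch. This is our illustration, not Turing’s own notation; the state names and the choice of a binary-increment machine are our own:

```python
# A minimal Turing machine simulator (an illustrative sketch of ours,
# not Turing's original notation). The unbounded tape is modelled as a
# sparse dict mapping positions to symbols; unwritten cells read blank.
def run_turing_machine(tape, transitions, state="carry", blank="_"):
    cells = dict(enumerate(tape))      # position -> symbol
    head = max(cells)                  # start on the rightmost symbol
    while state != "halt":
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write                 # write a symbol at the head
        head += 1 if move == "R" else -1    # move the head one step
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Transition table (state, read) -> (write, move, next state) for binary
# increment: in state "carry", a 1 becomes 0 and the carry propagates left;
# a 0 (or a blank past the leftmost bit) absorbs the carry and becomes 1.
INCREMENT = {
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
```

For instance, `run_turing_machine("1011", INCREMENT)` returns `"1100"` (binary 11 + 1 = 12). The sparse dictionary stands in for Turing’s unlimited tape; a physical machine would bound it.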
Turing’s Universal Computer is relevant to set the stage for discussing the question of
the use and historical development of the concept of end user. The Turing machine modelled
the image of the human user doing computing, or, in Turing’s own words, “We may
compare a man in the process of computing a real number to a machine which…” (Turing,
1936, p. 231). The comparison of a human doing calculations with a machine is rather direct,
in the same way a human reads and writes, the computer reads and writes. Furthermore,
Turing takes the comparison to a higher level when he says that the human is “in a state of
mind” while doing the calculation, which is also the way he describes the process of
computer calculations (Turing, 1936), pointing forward to the 1950 paper where he asks,
“can machines think?” (Turing, 1950. p. 433). On that basis we can claim that human-
computer communication with a Turing machine is a process of mutual configuration of
human and computer at a very low level of input-output exchange where the end user is a
domain expert (bank teller, logician, or mathematician). The concept of the end user that
emerged with Turing’s seminal work is a person doing calculations like a human operator.
With today’s retrospective eyes, these tasks have gradually been replaced by computers.
   The Turing machine can simulate the logic of any computational process and is a
versatile platform for human-computer configuration. However, even the smallest thing to
create with a Turing machine would take a very long time, a problem referred to as the Turing
tar pit (Perlis, 1982). This contrasts with systems that one can modify with fewer options, such
as specialized tools like a coffee cup or a wristwatch. Some systems that are easy to modify,
however, may not allow for much variation, and are referred to as over-specialized systems
(Hutchins, Hollan & Norman, 1985; Fischer & Lemke, 1988). This capacity for boundless flexibility in
terms of configurability sets the Universal Turing machine apart from specialized systems
for domain-expert users (Costabile et al., 2003). However, the complexity of operating a Turing
machine leads to a problem of balancing algorithmic computability, domain-specific tasks, and
physical machines.
   By adopting the perspective that not only computer systems and algorithms evolve but
also domain-specific tasks and human-computer interaction (Grudin, 2017), we can begin
to ask questions such as, is the machine becoming more like a partner, a straw man (e.g. Big
Tech companies), an information processor, a learner in training, a consumer, a client,
someone who is entertained – or all of this? By calling the computer a “learning partner,”
for example, the end user will be seen as a learner, or a novice in relation to an expert.
This brings us into the second seminal framework of the computer user, Lucy Suchman’s
ethnographically inspired framework (Suchman, 1987).

2.2. Suchman’s framework of computer-human interaction
Lucy Suchman created a framework to describe and analyze human-computer
communication. The framework has been influential in the HCI community, partly as a
critique of a cognitive approach to HCI and partly by providing a social foundation for HCI
research, adopting ethnographic, ethnomethodological, and critical approaches to HCI and
AI research (e.g., Bratteteig, 1997; Star & Strauss, 1999). This framework was used
empirically to describe and analyze office workers using a photocopy machine. The core
message of Suchman’s research is that instead of following predefined plans to guide action,
guidance for action emerges in situated action, or in her own words, "That term [Situated
Actions] underscores the view that every course of action depends in essential ways on its
material and social circumstances.” (Suchman, 1987, p. 70). The framework makes explicit
and visible different types of signals, data, and information pertaining to the situated use of
machines based on a theoretical framework obtained from pragmatist philosophy,
social psychology, and ethnomethodology (Dittrich, 2023). Thus, the concept of the end user
that emerges from Suchman’s research is that of a participant in interactions among two or
more conversational partners, where the computer is one of them.
   The conceptual framework Suchman constructed is meant for application in empirical
research. It is a protocol for observation and analysis described in a transcription table with
four columns, two related to the user and two to the machine. The table is shown in Figure
1. Suchman used the protocol for analyzing human-computer communication. The four
columns are what is not (1) and is (2) available to the machine, what the machine shows to
users (3), and the design rationale for the respective step (4). The format extended the
analysis format used in ethnomethodology adapted to human-computer interaction and
inspired the interaction analysis method (Jordan & Henderson, 1995). A key
finding is that many of the actions (including verbal interactions) issued by humans are not
available to the machine, implying that these actions are situated (materially or socially),
contrary to prevalent cognitive models of HCI, which were narrowly focused because the machine is
“…tracking the user’s actions through a very small keyhole” (Suchman, 2007, p. 11).




Figure 1: Suchman’s (1987) framework of interactions with an advanced (AI-based)
photocopy machine in terms of user actions and effects (output) of the machine. Rationale
refers to designers’ assumptions regarding intentions and consequences of user actions.
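As a reading aid, the four-column structure of the protocol can be encoded as a simple record. This is an illustrative sketch of ours, not Suchman’s notation (her protocol is a paper transcription table), and the row contents below are hypothetical, merely in the spirit of the photocopier study:

```python
# An illustrative encoding (ours, not Suchman's) of one row of the
# four-column transcription protocol. The example row is hypothetical.
from dataclasses import dataclass

@dataclass
class TranscriptRow:
    user_not_available: str   # (1) user actions/talk the machine cannot sense
    user_available: str       # (2) user actions the machine registers
    machine_effects: str      # (3) what the machine shows to the user
    design_rationale: str     # (4) designer's assumed intent for the step

row = TranscriptRow(
    user_not_available="Users discuss which page of the original to copy",
    user_available="User presses the Start button",
    machine_effects="Display: place the original face down on the glass",
    design_rationale="Pressing Start implies the original is in place",
)

# The "keyhole" finding in this encoding: rows whose first column is
# non-empty record user conduct that is invisible to the machine.
def invisible_actions(rows):
    return [r.user_not_available for r in rows if r.user_not_available]
```

Separating column 1 from column 2 is what makes the keyhole effect visible in the data: everything in column 1 is conduct the machine’s model of the user simply cannot register.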

   Suchman wanted to capture the shared understanding that emerges in the conversation
between humans and the expert support system (i.e., embedded in the photocopier). By
shared understanding it means the transitory intermediate products (understandings)
developed in a conversation, which is more than the sum of what any one of the actors
contribute and know on their own. Some form of shared understanding is at play when two
or more actors communicate, but what about when a human interacts with a computer? The
idea that the computer understands or creates an effect that is like or at least comparable
with human understanding was new at the time. The title of the second edition of her book
(Suchman, 2007) foregrounds a future where it makes sense to distinguish whether
machines are becoming more like people, or whether people are defining themselves more
as machines.

3. Conversational machines and artificial humans in future frameworks
AI systems have been on the research agenda since the 1950s. However, as Hobbes (1946)
suggested, both corporations and governments may be viewed as artificially intelligent
machines or entities, and things that are made by corporations are owned by somebody.
Applying Hobbes’s social contract theory to modern AI systems and Big Tech companies
presents an interesting perspective. Users surrender their data (a form of individual
freedom) to Big Tech companies in exchange for the advantages these technologies offer:
access, convenience, personalization, connectivity, some power, and more. These
companies, in turn, gain an enormous amount of power, knowledge, and control from
possessing and processing this data, much like Hobbes's Leviathan.
   Before the computer, and the telephone, we used many kinds of tools to support our
activities. We listened, talked, discussed, thought, and analyzed. Most of these mediated
activities aroused feelings of joy, excitement, sadness, and wonder. However, we also see
today that computers are becoming more like partners rather than merely tools (Grudin,
2017). As we progress further into areas of complex human-computer interaction mediated
by AI, conversational user interfaces (CUIs), and digital personas, the traditional concept of
the “end user” demands re-evaluation. Are we now in a time where the border between
computer and human gradually blurs, and where it starts to make sense to talk about the
computer configuring the user, instead of, or in addition to, the more common notion of
the user configuring the computer? CUIs, with advancements in natural language
processing, allow machines to interpret and respond to human communication beyond
simple commands, understanding nuances of context and emotion. Meanwhile, digital
personas present an image of autonomy, which will attract human users by giving them
means of exploring alternative identities, suggesting an interaction more akin to
communication between two (artificial) humans rather than between a human and a tool.
   These advancements signify a shift in the dynamic balance of control between end users
and computers, from the human to the computer, a drift that we believe should be the cause
of some concern. Now, it becomes imperative to revisit multiple frameworks for
understanding the reciprocal nature of human-computer interaction, including and going
beyond the two frameworks we have presented, considering real, pressing issues of ethics,
social responsibility, and the socio-technical implications of evolving technologies and
human-computer relationships. We suggest that a path toward that end lies in identifying
the strengths and shortcomings of previous frameworks while taking advantage of the
potentials that two very different types of intelligent entities, humans and computers,
together offer.
   Based on the ideas presented in this workshop position paper, the list of open issues for
discussion at the workshop could include:

   •   The reciprocal nature of human-computer interaction:
           o In what ways are AI systems shaping human behavior and thought
              processes, and how can we devise a new framework to better understand
              and guide the evolving relationships between humans and computers?
   •   Regarding the role of the end user:
           o As AI becomes more sophisticated, how should we redefine the concept of
              the “end user”?
           o What protections need to be in place for users as the line between human
              (as a behavioral machine) and computer (approaching partner) blurs?
   •   Sociotechnical systems and AI:
           o How does the integration of AI into sociotechnical systems affect social
              structures and relationships?
           o What role do AI systems play in reinforcing or challenging existing social
              hierarchies and norms?
   •   Balancing benefits and risks of advanced AI with the use of EUD techniques:
           o How can we foster an environment in which the advantages of AI can be
              maximized while mitigating risks?
           o   What mechanisms can be set up to weigh the benefits against the potential
               harms of sophisticated AI systems with end-user development techniques?

References
[1] T. Bratteteig, Mutual Learning. Enabling cooperation in systems design, Proc IRIS. Vol.
    20 (1997) 1-20.
[2] M.-F. Costabile, D. Fogli, C. Letondal, P. Mussio, A. Piccinno, Domain-Expert Users and
    their Needs of Software Development, HCI 2003 End User Development Session, June
    2003, Crete, Greece.
[3] J. Dittrich, Re-re-reading Lucy Suchman’s Plans and Situated Actions, Blog post in
    fordes/User Research, Design Methods, Education, Jun 30 (2023). URL:
    https://www.fordes.de/posts/rerereading_suchman_plansActions.html#fn:scription.
[4] H. Dreyfus, What Computers Still Can't Do. A Critique of Artificial Reason, The MIT
     Press, Cambridge, 1992.
[5] Y. Engeström, New forms of learning in co-configuration work, Journal of Workplace
     Learning 16.1-2 (2004) 11–21.
[6] J. Grudin, From Tool to Partner: The Evolution of Human-Computer Interaction,
     Morgan & Claypool, 2017.
[7] T. Hobbes, Leviathan, Basil Blackwell, 1946.
[8] E. L. Hutchins, J. D. Hollan, D. A. Norman, Direct Manipulation Interfaces, Human–
     Computer Interaction 1.4 (1985) 311-338.
[9] K. Kukich, Beyond automated essay scoring, IEEE Intelligent Systems 15.5 (2000) 22-
     27.
[10] K. Nygaard, P. Sørgaard, The Perspective Concept in Informatics, in: G. Bjerknes et al.
     (Eds.), Computers and Democracy, Avebury, Aldershot, UK, 1987, pp. 371–393.
[11] A. J. Perlis, Special Feature: Epigrams on programming, SIGPLAN Not. 17, 9 (Sept.
     1982), 7–13.
[12] S. L. Star, A. Strauss, Layers of Silence, Arenas of Voice: The Ecology of Visible and
     Invisible Work, Computer Supported Cooperative Work 8 (1999) 9–30.
[13] L. Suchman, Plans and Situated Actions: The Problem of Human-Machine
     Communication, Cambridge University Press, New York, 1987.
[14] L. Suchman, Human-Machine Reconfigurations, Cambridge University Press, New York,
     2007.
[15] E. Trist, The Evolution of Socio-technical Systems: A conceptual framework and an
     action research program, Ontario Ministry of Labour, Ontario Quality of Working Life
     Centre, 1981.
[16] A. M. Turing, On Computable Numbers, with an Application to the
     Entscheidungsproblem, Proceedings of the London Mathematical Society, 2 (1936)
     230-265.
[17] A. M. Turing, Computing Machinery and Intelligence, Mind, Volume LIX, Issue 236
     (1950), pp. 433–460.
[18] B. Victor, A. C. Boynton, Invented Here: Maximizing Your Organization's Internal
     Growth and Profitability, Harvard Business School Press, Boston, 1998.
[19] S. Woolgar, Configuring the User: The Case of Usability Trials, The Sociological Review
     38.1 suppl (1990), 58–99.